TestCaddy User Manual


User Manual for TestCaddy version 1.8.0.1064. Last Updated: 24 October 2014. For the latest version of this User Manual, see the TestCaddy website.

1 Introduction

TestCaddy is a software testing tool that allows you to create Test Sets for many different clients, projects or software programs. Test Sets are readily edited or updated, easily accessed for regression testing and TestCaddy maintains a full history of testing cycles and results. TestCaddy allows easy monitoring of progress through powerful reporting tools.

1.1 No More Confused Databases!

Many organisations continue to use spreadsheets for software testing. These rapidly become confusing and sometimes unreadable, particularly when more than one person needs to access the data. TestCaddy is a tool that will save time and money, whether your testing is done by developers or by a dedicated testing team.

1.2 What Is TestCaddy?

TestCaddy is essentially a database of Test Sets written for each customer and product to be tested, which can then be used for regression testing. A full history of testing is retained and at any time you can review and report on progress with your testing project. Tests can be updated or new Tests added at any time. TestCaddy is used in conjunction with a bug-tracker such as Bugzilla or Jira. The defect associated with a failed Test can be logged in your bug-tracking tool of choice. TestCaddy has close integration with Jira.


2 Setup

2.1 What Do I Need?

 System requirements are minimal. Any computer running Windows is generally sufficient. For full details refer to Installation Details.


2.2 How Do I Install TestCaddy?

Follow the step by step instructions on our Download page.

The TestCaddy installer will install any missing prerequisites. Alternatively, they can be installed manually (links are available on the Download page) prior to running the TestCaddy installer. Once the TestCaddy installer has finished, double-click on the TestCaddy icon on your desktop.

The Configuration Wizard will guide you through the installation process. You have the option of installing SQL server or configuring the current computer to connect to a specific SQL server. Select the option to install SQL Express if you want TestCaddy to stand alone from the rest of the IT infrastructure in your company. An IT Administrator can always move your TestCaddy database to a company SQL server at a later time. (For further information on which option to select for your situation see FAQs (Frequently Asked Questions) on our webpage.)

If connecting to an existing SQL server, you will need to know:

  1. The computer name (hostname) of the computer running SQL, and the Instance name if there is one, e.g. MyMachine\Instance1.
  2. A Username and Password that have database creation privileges on the SQL server. 
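If you want to double-check these details before running the Wizard, a short script can help. This is a minimal sketch assuming a standard SQL Server ODBC driver; the driver name, username and password shown are placeholders, not values TestCaddy requires.

```python
# Build a standard SQL Server ODBC connection string from the details
# listed above. The driver name, user and password are placeholders --
# substitute your own.
def build_connection_string(host, instance=None, user="tester", password="secret"):
    # "hostname\Instance1" form when an Instance name is present
    server = f"{host}\\{instance}" if instance else host
    return ("DRIVER={ODBC Driver 17 for SQL Server};"
            f"SERVER={server};UID={user};PWD={password}")

print(build_connection_string("MyMachine", "Instance1"))
```

The account supplied must have database creation privileges, since the Wizard creates the first product database through it.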

If this is the first time TestCaddy has been run against this SQL server, the Wizard will create the first product database, and add demonstration data to this database for you. At the end of the Wizard you will be asked to create a new login for yourself or to select an existing user. The main TestCaddy window then opens. 

Note: For more detailed instructions on download and installation refer to Installation Details.

2.3 Select a Database and Login

The first time you run TestCaddy a Configuration Wizard will assist you to connect to the SQL Server you wish to use, or to install SQL Express on your workstation. The Wizard will also help you to create and then select a database into which you can write Tests - your first product database. At the end of the Wizard you will be asked to select an existing user, or to create a new login for yourself. The main TestCaddy window will then open.

2.4 Demonstration Data

TestCaddy comes with the option of having some demonstration data loaded when it is first opened. The folders can be renamed, copied, moved, deleted, or new folders can be added. Similarly Test Sets, Tests and History Items can be edited, copied, moved, renamed or deleted. 
We recommend that when you have explored TestCaddy using the Demo Data you create a new product database to begin work on your real product (File>Database Manager).

3 Basic Usage

3.1 Introduction to Main Concepts

3.1.1 Naming and Terms

Not all testers use the same nomenclature and there may be some new concepts to understand when using TestCaddy for the first time.  As a guide, some of the more commonly used terms are introduced here.

Test Set: For convenience, individual Tests are grouped into Test Sets. (Any number of Tests can be included in a Test Set.) These may also be referred to as a test suite or as test scripts.

Master Test Library: Includes all the Tests in the product database (i.e. all the Tests written for a product).

Project: Work on a product is usually divided into Projects. 

Folder: Each Project may be subdivided into Folders. Test Sets are written for each Folder as work proceeds. Some teams may use a Test Set for each "sprint" in agile software development. The Tests used during one phase can easily be copied or edited for use in a later phase.

Functional Area: In TestCaddy all Tests are conveniently grouped according to the Functional Areas of the product (e.g. installation, web page, security features, etc.). This assists with locating Tests, and may also be useful for assigning work to members of the testing team.

History Item: Each time a Test is run, TestCaddy stores a record of the date and time of the run, the tester, and the status (Passed, Failed, or other options).
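Conceptually, each run adds a small history record under the Test. The sketch below illustrates the idea only; the field names are ours, not TestCaddy's internal schema.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative record of a single Test run -- not TestCaddy's
# actual internal schema.
@dataclass
class HistoryItem:
    run_at: datetime   # date and time of the run
    tester: str        # who ran the Test
    status: str        # "Passed", "Failed", or another option

run = HistoryItem(datetime(2014, 10, 24, 9, 30), "Bob", "Passed")
print(run.status)  # Passed
```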

3.1.2 Overview of the TestCaddy Window Layout

TestCaddy has, by default, one main display window on which there are three panes. 
The upper left pane is called the Projects Panel. This opens with a treeview of Project Folders, and sub-folders (phase folders), which can be opened to show the Test Sets you have created. 
The lower left pane is called the Active Set Panel. This opens to display the detail of the Test Set selected in the Projects Panel above. Again a treeview is used and all the Tests in the set are grouped into Folders (Functional Area Folders) reflecting the functional areas of the product. Below each Test are sub-items which record the history of each run of the Test. 
The third pane on the right side of the TestCaddy window is called the Work Area. This displays information on a particular Test selected in the Active Set Panel. The Test steps grid in the Work Area displays all the steps within a Test. When a step is selected the details appear in the fields below.

Note: Most fields in the Work Area are only enabled when either editing or running a Test. 

Tip: Some actions are provided as buttons across the top of the page, but most functionality is reached by "right click" menu items or through keyboard shortcuts.

3.1.3 Modes

TestCaddy runs in a number of modes: 

Browse mode: This is the normal mode. Click on the Project Test Sets button to return to this mode. In this mode you can view the Tests in Test Sets and their Folders. Limited editing is allowed in this mode. For example, you can create a new Test, Test Set, or Project Folder.

Master Test Library mode: This is the only mode where major organisational changes can be made, e.g. creating and moving Functional Areas, and reordering Tests. Click on the Master Test Library button in the upper left toolbar and all Tests in the product database are displayed in the Test Set area.

Select Tests mode: This is the mode used for editing or selecting which Tests are in a particular Test Set. Select a Test Set in the Projects Panel and then click on the Select Tests button found at the top of the Active Set Panel. All Tests in the Master Test Library are displayed, and the Test steps are shown when a Test is selected for easy recognition of the correct Test. 

Note: Some functions of the Master Test Library mode (e.g. create new Test) are also available. 

Edit mode: This mode is used for writing a Test, or adding or editing steps in a Test. To enter this mode, select a Test and then click on Edit Test. To exit this mode, use the Save or Cancel buttons to return to Browse mode. 

Run mode: This mode is used to run Tests and to record their status. TestCaddy enters this mode when you click on Run Test (or Continue Test and similar). The Expected Result Field is shown and you can set the status of individual Test steps or the entire Test. The Save and Cancel buttons let you exit this mode.

This mode also has its own saveable layout, so that when running Tests the TestCaddy window moves out of the way of the application being tested.

Note: Test steps can be edited in this mode.

 


To help you better understand TestCaddy modes, here are some quick how-to answers. 

How do I view the Tests in a Test Set?
Click on the Project Test Sets button in the upper left toolbar, then expand the Folder(s) related to your Project. Click on a Test Set and the Tests in that set will be shown in the Active (Test) Set Panel (the lower treeview). In the treeview you can also see which Tests are assigned to which tester. You can use the filter button to limit the visible Tests to those assigned to yourself. 

How do I make organisational changes (e.g. create or move Functional Areas)?
Click on the Master Test Library button in the upper left toolbar to enter Master Test Library mode. Then in the lower panel, right click on items in the tree to see available actions.

How do I add Tests to a Test Set?
Click on the Project Test Sets button, select the Test Set to which you wish to add, then click on the Select Tests button in the toolbar that splits the Projects Panel and the Active (Test) Set Panel. 

How do I run a Test?
Click on Project Test Sets, select the Test Set (add the Test if it is not already in the set), then click on Run Test in the upper right toolbar.


3.1.4 Master Test Library

The Master Test Library can be viewed by clicking on Master Test Library in the Projects Panel toolbar. 
Whenever a Test is created it is automatically added to the Master Test Library (for the current product database). If the Master Test Library button is clicked, all the Tests in the current database are shown in the lower treeview, and they are shown grouped according to Functional Area. This is the place to organise your Tests, i.e. to make new Functional Areas, move Tests between Functional Areas, create new Tests and order Tests. 

Note: These changes affect all Test Sets and all team members, so they are deliberately a little harder to find in the product and TestCaddy will remind you of their significance.


3.1.5 Test Labels & Numbering

Each Test created has a numeric test label and also a unique identifier (UID): both are displayed under the Test Properties tab in the Work Area when the Test is selected. 

Tip: The test label should be used when discussing a particular Test. 

The first part of the numeric test label refers to the Functional Area Folder where the Test resides, so if a Test is moved within a Folder it retains its label. However, if it is moved into another Functional Area Folder it will receive a different numeric label. (A warning box alerts the user that this will occur, since the changes affect all users.) If a Test is copied, the copy will automatically receive a new UID and a new numeric label. 

Note: Numbering of Tests may not appear to be consecutive, as Tests may have been moved. (The order in which Tests appear can be changed in the Master Test Library.) The numeric labels assigned to Tests which have been deleted are not utilised again.
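The numbering behaviour can be pictured as a per-Functional-Area counter that never hands out the same number twice. This is a hypothetical sketch of the rule, not TestCaddy's implementation, and the label format shown is invented for illustration.

```python
# Hypothetical per-Functional-Area label numbering: each Folder keeps
# its own counter, and numbers are never reused, even after a Test is
# deleted. The "Area-N" label format is illustrative only.
class LabelCounter:
    def __init__(self):
        self.next_number = {}  # Functional Area -> next free number

    def new_label(self, area):
        n = self.next_number.get(area, 1)
        self.next_number[area] = n + 1
        return f"{area}-{n}"

labels = LabelCounter()
a = labels.new_label("Security")   # Security-1
labels.new_label("Security")       # Security-2 (suppose this Test is later deleted)
b = labels.new_label("Security")   # Security-3 -- the deleted label is not reused
```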



3.1.6 Overall Test Status

When a Test has been run, the history shows as a sub-item in the Active Set Panel. An overall status is indicated for the run, and the status of the most recent run is displayed in icon form for the Test itself.

If the conditions for neither Passed nor Failed are met, the overall status is shown as In Progress (or Not Run if no steps have yet been run).
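TestCaddy's exact aggregation rules are not listed here, but a plausible reading of the behaviour described in this manual is: Failed if any step failed, Passed if every step passed, Not Run if nothing has been run, otherwise In Progress. A hypothetical sketch:

```python
# Hypothetical aggregation of step statuses into an overall run
# status; TestCaddy's exact rules may differ.
def overall_status(step_statuses):
    if any(s == "Failed" for s in step_statuses):
        return "Failed"
    if all(s == "Passed" for s in step_statuses):
        return "Passed"
    if all(s == "Not Run" for s in step_statuses):
        return "Not Run"
    return "In Progress"

print(overall_status(["Passed", "Not Run"]))  # In Progress
```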

3.2 Creating and Editing Tests

For each Test step the following fields are available: 

Description: A free text field. This should describe the actions the tester should carry out for that particular Test step. 

Expected Result: A free text field. This should describe what the tester should expect to see or observe after completing the actions in the Description for that particular Test step. When you run the Test you will also get an Actual Result Field and a step status Drop Down box.
It is possible to view or edit any of these fields as follows:
3.2.1 View a Test
  1. Click on the Project Test Sets button (upper left).
  2. Select a Test Set to open from the treeview in the Projects Panel. (Note that if you have set up Functional Area Folders then you will need to expand the one that contains your Test.)
  3. From the treeview in the Active Set Panel (lower left of screen) select the Test you wish to view. The Work Area now shows the Test steps grid, listing all the steps within a Test. Selecting any Test step will reveal details in the Description and Expected Result Fields below.
Note: Functional Area Folders expand to show individual Tests. If there are no Tests in the Test Set you need to write a new Test or click on Select Tests to add Tests from the Master Test Library. 

3.2.2 Create a Test

  1. Select the Test Set you want in the Projects Panel. 
  2. Right click in the Active Set Panel (lower left area) and select New Test to add a new Test. (Alternatively, select a Test in the Active Set Panel and press Ctrl+N.)
  3. Give the new Test a name.
  4. Click on the Edit Test button.
  5. Briefly outline the first step by typing an instruction in the description box (e.g. Open window and check all links).
  6. Briefly describe the expected outcome by typing in the expected box (e.g. All links should be working).
  7. Press Tab. TestCaddy will enter your first step in the Test step grid.
  8. Right click on the Test step grid and select New Step (or press Ctrl+N) to add a second step.
  9. When all steps have been entered click the Save button (Ctrl+S).

Tip: All Tests are saved into the Master Test Library.  If you create a Test in a Project Test Set (as in the example above) then TestCaddy automatically creates the Test in the Master Test Library and then adds the Test into the Project Test Set you are viewing. (View the Master Test Library by clicking on the Master Test Library button in the upper left toolbar).


3.2.3 Edit a Test

To alter steps in a Test:

  1. Select the Test to edit in the Active Set Panel.
  2. Click on Edit Test (upper toolbar above Work Area).
  3. Click on the step you wish to edit.
  4. Edit the text, or add new text in the Test Description Field and/or in the Expected Result Field.
  5. Click Save (or Ctrl+S).
To add steps to a Test:
  1. Select the Test to edit in the Active Set Panel.
  2. Click on Edit Test (upper toolbar above Work Area).
  3. Click on the next blank line in the Test step grid (or right click on an existing line) and select New Step from the Drop Down menu.
  4. Add text in the Test Description Field and/or in the Expected Result Field.
  5. Click Save (or Ctrl+S).
To move steps within a Test:
  1. Select the Test to edit in the Active Set Panel.
  2. Click on Edit Test (upper toolbar above Work Area).
  3. Right click on the line in the Test step grid that you want to move and select Move Up or Move Down from the Drop Down menu.
  4. Confirm your move by clicking OK in the dialog box.
  5. Click Save (or Ctrl+S).
To delete steps within a Test:
  1. Select the Test to edit in the Active Set Panel.
  2. Click on Edit Test (upper toolbar above Work Area).
  3. Right click on the line in the Test step grid that you want to delete and select Delete from the Drop Down menu.
  4. Confirm your deletion by clicking OK in the dialog box.
  5. Click Save (or Ctrl+S).
Note: Test steps can also be edited when running a Test. At the end of running a Test, TestCaddy will always ask if you want to save any changes just for that run of the Test (History Item), or to change the Test in the Master Test Library so that all future runs of that Test from any Test Set will incorporate the changes made.

3.3 Building Test Sets

3.3.1 Test Sets

Tip: Depending on the way your team works, grouping Tests into Test Sets may be done for a number of reasons. With other tools, teams often use a Test Set to break a Project into groups of Tests that test the same functional area of a product. In TestCaddy all Test Sets are already grouped by Functional Area to achieve this goal, which frees Test Sets to be used for other common purposes. 

Sometimes Test Sets are simply used to allow easy assignment of work to team members.

3.3.2 Create a Test Set

  1. Right click a Project or Folder in the Projects Panel.
  2. Select New Test Set from the Drop Down menu.
  3. Type in a name for this New Test Set and press Enter.

The set will be created, ready to be edited. TestCaddy automatically prefixes the name with a unique identifier for the Folder, giving a handy way to refer to the Test Set when talking to colleagues.
 

3.3.3 Build or Edit a Test Set

You can build a Test Set either by adding Tests to an existing Test Set, or by creating a new Test Set.
  1. Select a Test Set in the Projects Panel. 
  2. Click on the Select Tests button in the Active Set Panel toolbar. All Tests in the Master Test Library will now be displayed as a treeview. 
  3. Select those Tests to be included in the Test Set by left clicking the checkbox next to the Test. Once selected each Test will display a light green tick. 
  4. Click Save to finish building the Test Set.

Tip: You can click a Functional Area Folder to select or de-select all Tests in that Folder and its sub-folders.

You can remove Tests from a Test Set but caution is advised, as it may be better to mark a Test as Not Applicable.
  1. Select a Test Set in the Projects Panel. 
  2. Click on the Select Tests button in the Active Set Panel toolbar. All Tests in the Master Test Library will now be displayed as a treeview. Those Tests included in the Test Set will display a tick in the checkbox.
  3. Click on the Allow Removal button at the lower left of the Active Set Panel. The button will change to read Removal Allowed.
  4. Click on a ticked checkbox to remove that Test from a Test Set. A warning dialog box will appear. Select Yes in this box and the tick will disappear from the checkbox.
  5. Click Save to finish editing the Test Set.

3.4 Running Tests

Tip: Tests can only be run from a Project Test Set. It is not possible to select a Test in the Master Test Library and to run it from that location.  If the Test you wish to run is not in a Project Test Set, then you can add it (see Section 3.3.3 above). Select the Test Set you wish to use or create a new one and then click on the Select Tests button in the Active Set Panel (lower left).  This will show you all Tests in the Master Test Library and you can 'check' the checkbox next to any Test you wish to add to the Test Set and then click Save.

3.4.1 Running a Test

  1. Select a Test Set in the Projects Panel.
  2. Select a Test in the Active Set Panel.
  3. Click on the Run Test button in the upper toolbar in Work Area.
  4. Select step 1 in the Test step grid.
  5. Read the description and the expected result. 
  6. Go to the program being tested and perform the actions as described.
  7. Type in the actual result in the Actual Result Field. 
  8. Select Pass/Fail (or other appropriate result) in the status box. 
  9. Repeat with each step.
  10. Click on Save. 
  11. You will be asked to select or confirm the environment, version and build of the product before the Test result is saved. (Note that this requirement can be turned off in Settings). 

Note: The result is shown as the most recent History Item under the selected Test in the Active Set Panel. The current overall status of the Test is also determined from your most recent run.

Note: Because Test steps can be edited while running a Test, at the end of the run TestCaddy will ask you if you want to save the changes only for that run of the Test (History Item), or to change the Test in the Master Test Library so that all future runs of that Test from any Test Set will use the changes you have made.

3.4.2 Status Box Shortcuts

When a Test step has been run the tester selects an outcome from the Drop Down list in the status box displayed at the bottom of the Work Area. TestCaddy marks all steps initially as Not Run, and a Test with all steps in this state as not yet run. Choices then include Passed, Failed, and other options such as Not Applicable.

Note: Shortcuts speed up this process and several steps may be selected at once.


3.4.3 Product Version, Build and Environment

During the running of a Test, the product version and build number are displayed in the Work Area. The product version number defaults to the most recently used version, but also has a Drop Down list of all current versions.

As new product versions become available for testing they should be entered via Configuration > Versions List. A version number and a brief description may be entered, and versions can be made inactive once they are outdated. (Note: When a History Item is being edited, inactive versions are still available.) Similarly, the build number defaults to that most recently used, but a Drop Down list offers all available builds, or the option of entering a new build number for the currently selected version.

The Environment and General Custom Fields can be recorded during a Test run. (Similarly they default to the most recent selections by the tester.)

When a run is saved, if the tester has not changed each of the displayed values for Environment and General Custom Fields, version and build, a confirmation dialog box asks you to confirm that the default options are correct. If you do not wish to record these for every run of a Test, carry out the following: 

  1. Click on Configuration and, from the Drop Down menu, select Settings. A Settings box will open in a separate window.
  2. Click on the line that reads: Current User-Default-Require Environment and Build-True.
  3. Click on Edit.
  4. A dialog box offers the option to change the setting from True to False.
  5. Click on OK to change the setting.
  6. Click on the Close button to exit the Settings box.

TestCaddy will no longer require environment, version and build before saving a run.


4 Core Features

4.1 History Items 


A History Item is created each time a Test is run. The History Item records all of the Test information (Steps, Environment and General Custom Fields, notes etc.) exactly as they were entered when the Test was run.

4.1.1 Continuing a Run

If only some steps of the Test are completed, these can be saved, and the remainder left as Not Run. The Test history will show that the Test is still In Progress (or Failed if one or more steps were failed). The Continue Run item on the toolbar at the top of the Work Area displays the Test at the point it was saved, and the run may be continued.

4.1.2 Editing a History Item

Sometimes it is necessary to correct the information recorded when a Test was run. Simply select the History Item and click on Edit History in the upper toolbar of the Work Area. The Actual Result Field may be edited, and it is also possible to change the environment, version or build information.



4.2 Dashboard 


The Dashboard displays selected Reports providing information on the progress of the project.  Many of the Reports will automatically refresh as you select Folders or Test Sets in the Projects Panel.

4.3 Custom Fields



4.3.1 Custom Fields

TestCaddy's Custom Fields are split into two categories: Environment and General Custom Fields. Environment Custom Fields let you set, for example, the Operating System or SQL Server Version, and are color coded green throughout TestCaddy for ease of use. All other Custom Fields should be placed in the General Custom Field category, which is color coded blue. An example of a General Custom Field might be Reviewed Status.

Both categories of Custom Fields can be managed through the Custom Fields Manager dialog, where you can create new Custom Fields and enter and edit lists of values available for each field (see Section 4.3.2).

Custom Fields can be used with:

  1. Tests in the Master Test Library. For example, a Test may be marked as being not yet ready to be run, by using the Test Properties tab.
  2. Tests or Test Instances in a Test Set. For example, a Team Leader can note the environment they would like the Test to be run on, by using the Planning tab.
  3. Tests or Test Instances that are being run. For example, this is how a tester records which environment the Test was actually run on, by using the Test Run's main window.
As an example, a Team Leader, when assigning Tests to testers, might set an Environment Custom Field of Operating System to "WindowsXP", and set a General Custom Field of Reviewed Status to "No" in the Planning tab. The individual testers can then set the Operating System on which they actually ran the Test. The Team Leader can now filter a Test Set to find all the Tests that have been run and have their Reviewed Status set to "No", and can then check the results and change the Reviewed status to "Yes" (by editing the History Item). This allows effective communication between different levels in the testing process.
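The Team Leader's review loop in this example boils down to a simple filter over run records. The sketch below uses made-up data and field names mirroring the example; it is not how TestCaddy stores Custom Fields internally.

```python
# Made-up run records mirroring the example above; TestCaddy stores
# this information differently internally.
runs = [
    {"test": "SEC-1", "run": True,  "OS": "WindowsXP", "Reviewed Status": "No"},
    {"test": "SEC-2", "run": False, "OS": "WindowsXP", "Reviewed Status": "No"},
    {"test": "WEB-1", "run": True,  "OS": "Windows7",  "Reviewed Status": "Yes"},
]

# Tests that have been run but not yet reviewed -- the Team Leader's
# worklist from the example.
to_review = [r["test"] for r in runs
             if r["run"] and r["Reviewed Status"] == "No"]
print(to_review)  # ['SEC-1']
```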

4.3.2 Managing Custom Fields

TestCaddy allows the user to set their own Environment (green) and General (blue) Custom Fields to cater fully for the testing process. Using Configuration > Custom Fields allows the creation of your own settings for Environment to clearly inform the tester which Operating System etc. the Test requires. It also allows you to set Custom Fields to specify the exact way you want your Tests to be run. 

To create these fields you will first need to create a "Drop Down" by right clicking on a root level Folder, Environment or General Custom Field. The Drop Down is the parent of the options you will have. Right clicking on it and selecting New Drop Down Items will allow you to create the next level of Drop Down Items. For example, if you wish to have your testers set the Operating System when they are running Tests then you would set an Environment Drop Down to "OS" and your Drop Down Items underneath it to "Windows 7", "Windows Vista SP2" etc.
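The resulting two-level structure, a Drop Down as parent and Drop Down Items as children, can be pictured as a simple mapping (purely illustrative, using the values from the example above):

```python
# Illustrative picture of an Environment Custom Field hierarchy: the
# Drop Down ("OS") is the parent of its Drop Down Items.
environment_fields = {
    "OS": ["Windows 7", "Windows Vista SP2"],
}

for drop_down, items in environment_fields.items():
    for item in items:
        print(f"{drop_down} -> {item}")
```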

Note: TestCaddy also allows you to create Folders to organise your Custom Fields in the way you see fit. The levels of Folders are removed in the Custom Field selection dialogs that the users see.  This is to allow for quick entry.

You can edit your Custom Fields at any time by accessing the Manage Custom Fields menu (using Configuration > Custom Fields), selecting your Custom Field and clicking the Edit Custom Field button on the upper toolbar. This allows you to change the name, Import Notes and the Custom Field's availability in the Custom Field selection (green and blue) grids. 

Unticking the Available box means that a Custom Field is no longer intended for use. It will still be visible if already selected in a Test but cannot be saved when Editing or Running a Test. If you Edit History then Not Available options can be saved, so long as you don't change them. This allows editing of other properties of a History Item without changing the Custom Field value set at the time of the run. Custom Fields marked Not Available can still be selected in filters and Reports so that you can report on old Tests.

On the right click menu you are also given the option to delete a Custom Field. In general this should only be used for Custom Fields which have never been used, or when a Custom Field has not been used for a very long time and there is no interest in it, or it is cluttering the Custom Field user interfaces. 

When you delete a Custom Field, if you have not used it with any Tests (Planning or Run History) then it will be permanently deleted. If you have set this Custom Field in, for example, the Planning tab of a Test then you will only be given the option to mark it as deleted. The Custom Field will be removed from the user's view, but the information regarding it will still be stored by TestCaddy to protect data integrity. This means it will act just like a deleted Test. Custom Fields marked as Deleted are NOT listed in filters and Reports: from the user perspective they are gone and there is no record of them having been recorded.

Custom Fields marked as Deleted can have the delete action reversed by clicking 'Show all Custom Fields' (deleted Custom Fields can only be seen if 'Show all Custom Fields' is ticked), then right clicking and selecting 'Undelete'. This restores it to full functionality.

You can also change the Type of a Custom Field if you have ticked the "Allow Type changes" box. This allows you to change a Folder to a Drop Down, etc. This is an advanced feature for use after importing Custom Fields from another product and hence should be used with caution. TestCaddy will notify you by marking a Custom Field as "INVALID TYPE" or "INCONSISTENT TYPE" if your changes result in a Custom Field that is in an illegal position (e.g. a Drop Down Item directly under a Folder).

4.4 Planning


4.4.1 Planning Tab

The Planning tab allows you to set which Environment and General Custom Fields you want the Test to be run with, separately from the actual Test run. The Planning tab can only be edited when editing a Test, not while running a Test. The fields you set on the Planning tab remain unchanged even if you set different values when running the Test.

The Planning tab also allows you to write Planning notes to give other testers specific instructions on the running of the Test.

4.4.2 Assigning Testers

Click on any Test in a Test Set, and use the right click menu to assign a tester for that Test. (Note: This option is not available in the Master Test Library.) Either individual Tests, or all the Tests in a particular Functional Area Folder, may be assigned, and will show the name of the tester to whom they have been assigned (while still remaining available to others). A Test is only locked for the time it is actually in use. The right click menu offers three assignment options. 

4.5 Filtering 

Filters can be created for the Active Set (lower left treeview) and for Reports and Dashboards.

4.5.1 Active Set Filtering

The "Filter" is on the Active Set Panel toolbar. The status of the filter stays constant as you move through TestCaddy's modes and when you exit and return to TestCaddy your last filter is always remembered.

This filter is useful when Selecting Tests. The specified text filter option can be used to limit Tests in the treeview to Tests that mention a certain feature, no matter what Functional Area Folder they are stored in. 

Warning: If you can't see what you are expecting in the Active Set Panel treeview then check your current filter and turn it off if necessary.

4.5.2 Reports Filtering

You can use the filter when in the Reports section to filter the Tests that your graphs display. This lets you customise your graph sets to show only the Project's progress on a specific area. Edit your selected graph and use the filter as you would if in the Active Set and Save. Your graph will now show only Tests that match your filter when viewed in either Reports or in the Dashboard tab.

For more information refer to Advanced Reports (Section 7).

4.5.3 Filter Options

You can view only the Tests of interest by enabling the filter and selecting which Tests will be displayed.

In all, there are four different filters that can be used. They allow you to see only Tests with a particular status, Tests assigned to a particular tester, Tests containing specified text, or Tests matching selected Custom Fields.

Simple checkbox ticks allow any combination of the above filters to be used. For example, you can filter to see only Tests that have a status of Passed AND are Assigned to tester 'Bob'.

4.5.4 Filtering on Custom Fields

The Custom Fields section of the Filter Dialog has an SQL Query text box where you can enter SQL filter strings to refine your Custom Field filter.

The Custom Fields SQL Query box does not accept keystrokes, with the exception of the left and right arrow keys, spacebar and the backspace keys. It treats each Custom Field as a "chunk" and has special text highlighting and navigation buttons for these chunks.

The left and right arrow keys, spacebar and the backspace keys allow you to navigate and delete "chunks" of your search query. Using the arrow keys will highlight the next section of your search and pressing backspace will delete the next section to the left. These keys also have their own buttons at the bottom of the dialog as well as "OR", "AND", "Clear" and "Insert Custom Field item(s)". 

Example;

To see Tests that have been run with WinXP32 or Win7, and have a Review Status of "Yes": 

  1. Click the "Insert Custom Field item(s)" button.
  2. On the drop downs select "OS--WinXP32" and "Review Status--Yes" (making sure that the AND box at the bottom is ticked) and click Done.
  3. Back on the main dialog, click "OR" and then click the "Insert Custom Field item(s)" button again.
  4. Select "OS--Win7" and "Review Status--Yes", then click Done.

This would give you the search query of ("OS--WinXP32" AND "Review Status--Yes") OR ("OS--Win7" AND "Review Status--Yes"). You can then choose to have your query filtered as "Contains" or "Matches Exactly", depending on how you want to define your search. You can also choose whether to filter on Planning, Last Test run, or Test Properties (or any combination of the above) by ticking the checkboxes on the dialog.
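The way such a query selects Tests can be illustrated outside TestCaddy. The sketch below (plain Python; the records and field values are invented for illustration and are not TestCaddy data) evaluates the example query against a few hypothetical Tests:

```python
# Hypothetical Tests with Custom Field values (illustration only).
tests = [
    {"Name": "Install A", "OS": "WinXP32", "Review Status": "Yes"},
    {"Name": "Install B", "OS": "Win7",    "Review Status": "Yes"},
    {"Name": "Install C", "OS": "Win8",    "Review Status": "Yes"},
    {"Name": "Install D", "OS": "Win7",    "Review Status": "No"},
]

def matches(test):
    """("OS--WinXP32" AND "Review Status--Yes") OR ("OS--Win7" AND "Review Status--Yes")"""
    return ((test["OS"] == "WinXP32" and test["Review Status"] == "Yes")
            or (test["OS"] == "Win7" and test["Review Status"] == "Yes"))

selected = [t["Name"] for t in tests if matches(t)]
print(selected)  # ['Install A', 'Install B']
```

Note that both branches of this query require Review Status "Yes", which is the kind of query the optimisation mentioned below can simplify.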


There is also a Recent Selection Drop Down, which shows the last 20 SQL filter queries. You can use this to re-use your past filters by selecting a filter from the list. This will update your dialog to the same data as that filter.

Note: When filtering on Custom Fields the filter will sometimes simplify your search query if it can, to optimise your search.



5 Advanced Usage



5.1 Performing a Quick Run (i.e. Overriding the Current Test Status Without Test Steps)

If you wish to change the current status of a Test, simply select the Test and click on the small arrow to the right of the status (lower right toolbar). A dialog box allows the option of a comment in the Actual Result Field, and a check box turns off the need to record version and environment.


5.2 Setting up TestCaddy for Team Use

5.2.1 Setting up a TestCaddy Team

  1. Choose a Team Leader.

Certain tasks should only need to be carried out once and then details communicated to the rest of the team. A Team Leader should be selected to carry out these tasks to avoid duplication of effort.

  2. Create the Product Database(s)

The Team Leader should create the product database(s). Although these can be created at any workstation, it is preferable for the Team Leader to take on this task. The Team Leader should consider whether to have a) one product database for each product, with Folders for each version, or b) one product database for each major version.

For example, databases for KoffeeSoftware and SuperEmailer might be set up as:

  a) One database per product, with Folders for each version:

     KoffeeSoftware Database
         Version 6.0 Folder
         Version 6.1 Folder
     SuperEmailer Database
         Version 3.0 Folder
         Version 4.0 Folder

  or b) One database per major version:

     KoffeeSoftware v6.0 Database
     KoffeeSoftware v6.1 Database
     SuperEmailer v3.0 Database
     SuperEmailer v4.0 Database


  3. Set up logins

The Team Leader may wish to set up an appropriate login name for each member of the team. New login names can be added at any time, and a login may be deactivated if necessary. Individuals may add their own login at any time.

Tip: We suggest each tester uses the same login name as in your bug-tracking tool.

  4. Install TestCaddy

Each team member should install TestCaddy on his or her workstation and all should use the same product database(s). Team members will need the computer name (or IP Address) of the SQL server that TestCaddy is using, and the username and password required to login. (The SQL server can be set up for Windows Authentication, if that method is preferred.)

TestCaddy is designed so that users should be able to administer it themselves, e.g. through adding versions and builds, or adding user logins.

Tip: If working on two products at one time, it is necessary to run two copies of TestCaddy at once, one for each product.


5.2.2 Organizing a Project

Work on each product will probably be divided into Projects (which may be carried out in several phases), and the Projects may be divided into Folders. Folders for each Project (and sub-folders for each phase) are created in the Projects Panel. Within a Folder you can create a collection of Tests grouped into Test Sets. The hierarchy looks like this:

Product Database
    Master Test Library
    Folder 1
        Sub-Folder 1
            Test Set 1
            Test Set 2
    Folder 2
        Sub-Folder 1
            Test Set 1
        Sub-Folder 2
            Test Set 1

  1. Create Projects, Folders and Test Sets

    To create a new Project Folder or sub-folder, right click on the product database in the Projects Panel, and use the menu to choose "New Folder". A new Project Folder is created, ready for the name to be edited. Similarly a new sub-folder can be created by right clicking a Project Folder and choosing "New Folder" from the menu.

    It is not necessary to specify all the Folders in a Project, or the number of Projects at the beginning; these are usually added as work proceeds. The number of Folders and sub-folders used is entirely optional. (The Folders created can, of course, be used as desired, to divide up testing work; but our suggestion is that each Project has a top level Folder, and sub-folders are created for each phase.)

    In each sub-folder create Test Sets as appropriate (see Section 3.3), and select groups of Tests to run in each of them. You can then assign whole Test Sets or Tests within a Set to individual testers on your team.
  2. Create Functional Areas 

Choose what the Functional Areas will be, then create them in the Master Test Library. The Test Sets and Tests are conveniently grouped according to Functional Area in the Active Set Panel. These might include, for example, installation, web page, security features, etc.

Splitting your Tests into the Functional Area Folders is recommended but not mandatory. You can group your Tests in whatever way you desire. Some teams group by testing type, such as functional tests, regression tests etc. Others may group by functionality or GUI layout, or by some combination of both groupings. Whatever is chosen, the Functional Areas and groupings can be edited or added to as work proceeds.

EXAMPLE: Functional Areas for KoffeeSoftware Version 6.0.1:


Install/Uninstall/Upgrade

Reporting and Logging

Web Page

Text Entry, Selection and Deletion

Text Formatting

Security Features

Miscellaneous


Tip: Another area such as "Non-functional testing", or a "Miscellaneous" category, may be added for convenience if some Tests do not fit readily into any Functional Area of the product.

If no Functional Areas are set up, all Tests will be listed at the top level in the Master Test Library. All TestCaddy users see the same Functional Areas in the Active Set Panel treeview; they are the same for all Test Sets in the product database into which you have logged. Grouping the Tests in the Master Test Library into Functional Areas becomes more helpful as your quantity of Tests increases, particularly when reviewing which areas of your product you have tested to assess testing coverage. Functional Areas help with breaking up or assigning work, and even with simply finding a particular Test. We find that Functional Areas are a great way to group Tests from a user-centric perspective, helping you to review testing coverage from the customer's point of view.

As a simple example, if you are testing a calculator application you could group Tests EITHER for elements of the screen (e.g. digit button tests, operators buttons like plus and minus tests, etc.), OR by considering the way that a user would think (e.g. by addition, subtraction etc.). Typically a combination is needed.

Functional Areas can also be created for things like exploratory sessions, where you can easily add and edit Tests as you run them to track your exploratory session.


5.3 The Right Click Menu

Much of the functionality of TestCaddy is accessed via the right click menu (or alternatively, by the use of short-cut keys). This allows the rapid building of Test Sets as work progresses. Previously written Test Sets can be cloned, renamed and edited, with some Tests omitted and new Tests added. 

Tip: Use placeholders to quickly build a new Test Set. The detail can be added later. 

If the Master Test Library is selected the right click menu offers the following options:

  1. Create a new Test or a new Functional Area.
  2. Cut, Copy, Paste, Move Up or Move Down (applies to Tests or Functional Areas).
  3. Rename either Tests or Functional Areas.
  4. Delete either Tests or Functional Areas.
  5. Assign either testers or Environment.
  6. Expand or contract nodes in the Active Set Panel.
  7. Export the Test Set data in CSV format.

If an item in the Projects Panel (a Project Folder or Test Set) is selected, then the right click menu offers the following options:

  1. Create a new Folder or a new Test Set.
  2. Edit a Test Set (add or delete Tests selected from the Master Test Library).
  3. Cut, Copy, Paste, Move Up or Move Down either Test Sets or Project Folders.
  4. Rename Test Sets or Project Folders.
  5. Delete Test Sets or Project Folders.
  6. Export the Test Set data in CSV format.

If a Project Test Set is open and an item in the Active Set Panel is selected, then the right click menu offers all the same functionality as in the Master Test Library.

5.4 Related Tests (Shared Steps) and Template Variables

When a Test is selected in either the Master Test Library or the Projects Panel, the right click menu gives the option to create a related test with shared steps. The Create Related Test option from the drop down menu allows three sorts of related tests to be created: Coupled Tests, Test Set Instances or Linked Tests. The three types provide different options for building on the original test, allowing it to be run across multiple environments, or with slightly different combinations of steps.

Coupled Tests share the same Test Steps, but allow Test Properties and Test Planning differences.

Test Set Instances share the same Test Steps and Test Properties but allow only Test Planning differences.  

Linked Tests share just a few Test Steps from another Test and don't share any other test information.
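The differences between the three types come down to which parts of the parent Test each one shares. The sketch below (plain Python; the dictionary is purely illustrative, not a TestCaddy data structure) encodes the three definitions above for quick comparison:

```python
# What each Related Test type shares with its parent Test
# (encodes the three definitions above; illustration only).
SHARING = {
    #                     (Test Steps,      Test Properties, Test Planning)
    "Coupled Test":       ("shared",        "own",           "own"),
    "Test Set Instance":  ("shared",        "shared",        "own"),
    "Linked Test":        ("partly shared", "own",           "own"),
}

for kind, (steps, props, planning) in SHARING.items():
    print(f"{kind:18} steps={steps}, properties={props}, planning={planning}")
```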

Note: Test Set Instances are not real tests because they are not created in the Master Test Library and can only be created in a Test Set.  They only share the lifetime of the Test Set and hence are useful because they don't clutter your Master Test Library with test variations that are only relevant for one round of testing.

In versions prior to 1.7.0, Test Set Instances were simply called Instances.

Related tests can use Template Variables to assist in reusing the same test or steps with minor variations based on selected Custom Fields. Template Variables allow the Custom Fields set on each test to display in the text fields; e.g. the Description field on a test step might be "Install on {{OS}}", which uses the OS Template Variable. On each copy of the test the variable will be replaced with the OS Custom Field value set for that test, e.g. Windows8 on one test and Windows7 on a Related Test.

Related Tests may be run in the same way that any other test is run. 

Often, if you call a base Template Test from your Linked Tests, you need to highlight something to the Tester. Every Test (or Step) called from your Test has Call Notes (instead of the Step Description field) which allow you to provide further information on the called steps.

Further Terminology:  Any Test which has been called by another Test is marked as a Template Test.   We refer to any Test which isn't related as a Plain Test.

Note: We do not currently support multiple levels of recursion, other than Instances, which can be made from a Plain Test, Linked Test or Coupled Test.

Common Usage Scenarios:

Linked Test Scenario
Linked Tests are useful when a number of tests in your library are quite different from each other but share a few common steps.  Perhaps they share Test Setup information.  Simply make a Plain Test which has the common steps and click the Template Test button to turn it into a Template Test.  Then for each Test that will call those steps, create a typical Plain Test in the Master Test Library and click "New Linked Step..." to select the Template Test with the common steps. 

Instance Scenario
If you are building a Test Set and your current project needs two instances of a test, say one to assign to  Tester1 and their Windows 2008 test environment and one to assign to Tester2 and their Windows XP environment, then you can add the test to your Test Set as normal and assign it to Tester1 and then create a Test Set Instance and assign it to Tester2.   A Test Set Instance is useful in this case because you don't need to clutter your Master Test Library with a test for a different environment that may not be needed in future projects.

Coupled Test Scenario
A good time to use a Coupled Test is if you need your Master Test Library to contain two copies of a test which only differ by Custom Field values and you need to remind your team to review these variations of a test. For example, you have the same installation test which should be run on different Operating Systems but you need to remind the team which of those are high priority and which are low. To do this create the first Test and set its Test Property Custom Fields, e.g. OS--Win8 and Priority--High and then create Coupled Tests for the second and third copies etc and set their Test Property Custom Fields to the correct Operating System and Priority, e.g. the second copy might be, OS--WinXP and Priority--Low.  When testers are selecting Tests to put in a Regression Test Set they will see that some versions of the Coupled Test are higher priority than others, and hence should run those tests to test those environments ahead of the lower priority tests.

5.4.1 Coupled Tests

Coupled Tests share the same Test Steps as their parent test, but allow Test Properties and Test Planning differences.

Coupled tests should be created where the Template Test and its Coupled tests are likely to be used across several different projects and are therefore relevant to several different Test Sets. Coupled Tests are always automatically added to the Master Test Library whether you create them in the Library or in a Test Set.

To Create a Coupled Test

1.      Right click a Test in the Active Set Panel to both select the Test and display menu options.

2.      Select Create Related Test. A second drop down level will appear.

3.      Select New Coupled Test from this second drop down. A new test will appear in the tree-view named as {Coupled: <Test name>}.

The Test Definition of the new Coupled test will appear in the Work Area. The steps will be identical to the original test, except that they will now be “Calls” rather than “Steps”. The only Step in the Coupled Test is a step that calls the Steps from the original. These steps from the original are now numbered 1-1, 1-2, etc rather than 1, 2, etc.

Note: The Edit Test button is still active, but only a limited number of functions remain available for editing the new Coupled test. Clicking the Edit Test button allows Call notes to be added to the description of the remaining Step, but the description and expected results of the various 'called' Steps can no longer be changed.

To Edit the Planning of a Coupled Test  

1.      With the Coupled Test selected in the Active Set panel, click the Planning tab at the top of the Work Area.

2.      Click the Edit Test button.

3.      Change the Assignee, Environment or General Custom Fields as required.

4.      Click the Save button at the bottom right of the Work Area to save your changes and exit the editing mode.

5.      Alternatively, click the Cancel button at the bottom right of the Work Area to exit without saving.

Note: While in the Editing Mode, it is possible to switch between the Test Definition, Test Properties and Planning tabs, but only some features may be edited: Test Properties, Test Planning, and the Call Notes on the calling Test Step. At any time you need to click Save or Cancel to exit the Editing Mode.

5.4.2 Test Set Instances

Test Set Instances can only be accessed in the Test Set in which they were created, and are used when the Related Test is only to be run for one particular project. Use of Instances helps to avoid cluttering the Master Test Library with copies of very similar tests.

Test Set Instances are copies of the original test created only in the Test Set that cannot be edited. However, their Planning can be edited, allowing the assignment of different environments to different Instances. This allows, for example, identical tests to be run on different operating systems and for the results on each system to be separately recorded.

To Create a Test Set Instance

1.      Right click a Test in the Active Set Panel to both select the Test and display menu options.

2.      Select Create Related Test. A second drop down level will appear.

3.      Select New Test Set Instance from this second drop down. A new test will appear in the tree-view named as [1] <Test name>.

The Test Definition of the new Instance will appear in the Work Area, but in this case the Steps will be identical to the original. The Test Set Instance is an exact copy of the original Test.

Instances are particularly useful for running the same test across multiple operating systems and recording the results separately. It is also possible to assign different testers for different instances, or to note other characteristics such as reviewed status.

To Edit the Planning of a Test Set Instance

1.      With the Instance selected in the Active Set panel, click the Planning tab at the top of the Work Area.

2.      Click the Edit Test button.

3.      Change the Assignee, Environment or General Custom Fields as required.

4.      Click the Save button at the bottom right of the Work Area to save your changes and exit the editing mode.

5.      Alternatively, click the Cancel button at the bottom right of the Work Area to exit without saving.

Note: While in the Editing Mode, it is possible to switch between the Test Definition, Test Properties and Planning tabs, but only Planning Tab features may be edited. At any time you need to click Save or Cancel to exit the Editing Mode.

5.4.3 Linked Tests

Linked Tests have one or more steps that "Call"  test steps from other tests.  They can also have their own plain test steps.  The resulting mixture of test steps provides a useful test in its own right, which shares a full test or just a group of steps from another test.   This can be useful where a number of tests share common steps, for example, common pre-test setup steps could be shared.  Linked tests can be edited under the Test Definition, Test Properties and Planning tabs. This allows you to add or take away steps, alter the Test Purpose, add notes, change the Test Property Custom Fields or change the Test Planning.  As with Coupled Tests, this allows different Linked Tests to be, for example, set to use different operating systems via the Test Property Custom Fields.  In addition the ability to edit Test Definition steps allows tests to be extended or shortened as changes are made to the software being tested.

The ability to edit Test Definition steps sets Linked Tests apart from Coupled Tests (where only Test Property Custom Fields can differ from the parent Test) and Test Set Instances (where only Test Planning can differ from the parent).

To make a Linked Test, you have two options:

A. Start by creating a test and then add a "Calling" test step using the New Linked Step... item on the right click menu of the test steps list. This displays the Master Test Library so you can select the Test or Test Step(s) you wish to link to. Or,

B. Right click on a test and select New Linked Test from the Related Tests menu. One Called step will be inserted which calls all the steps in the test you had selected.

Either way, you can then edit the Linked Test to add your non-shared steps.

Linked Tests can be edited fully (other than the called test steps' fields such as Description and Expected Result) and are completely separate from the Tests they are linked to.

When you select test steps to be called from another test, you can insert one Calling step for each step in the called test, or you can have just one Calling step which calls all or some of the called test's steps. Sometimes it is preferable to have one Linked step for each shared step (a one-to-one relationship), but generally using one Linked step to call all the shared steps makes the test more usable when running it.

When you run a Linked Test you can set a step result for each of the called steps. You do not set a result on the Calling step; it will always be marked with an NA (Not Applicable) status.

To Create a Linked Test

1.      Right click a Test in the Active Set Panel to both select the Test and display menu options.

2.      Select Create Related Test. A second drop down level will appear.

3.      Select New Linked Test from this second drop down. A new test will appear in the tree-view named as Linked: <Test name>.

The Test Definition of the new Linked Test will appear in the Work Area. The steps will be identical to the original test, except that they will now be “Calls” rather than “Steps”. However, unlike Coupled Tests (above), plain Steps can still be added, removed or reordered. The steps from the original are now numbered 1-1, 1-2, etc rather than 1, 2, etc.

To Edit a Linked Test

1.      Click the appropriate tab (Test Definition, Test Properties, or Planning) at the top of the Work Area.

2.      Click the Edit Test button.

3.      Edit as desired: the Steps, their order, and Call Notes (Test Definition tab); the Test Purpose, Test Notes, and Environment and General Custom Fields (Test Properties tab); or the Environment and General Custom Fields and Planning notes (Planning tab). Jira tickets may also be added or edited.

4.      Click the Save button at the bottom right of the Work Area to save your changes and exit the editing mode.

5.      Alternatively click the Cancel button at the bottom right of the Work Area to exit without saving.

Note: The Edit Test button is fully active for Linked Tests, meaning that a Linked Test could ultimately be edited to the extent that it bears no resemblance to the original. However, it is not possible to edit or delete 'Called Steps'. To delete these, the Step that makes the “call” must be deleted.

To Add New Steps to a Linked Test

1.      Click on the Linked Test you wish to edit.

2.      Click the Test Definition tab at the top of the Work Area.

3.      Click the Edit Linked Test button.

4.      Right click on any Step or Call in the Step box. A drop down menu will appear.

5.      Click New Step. This will create a new step at the end of the list of steps.

6.      Click in the Description box and type in a description of the new test step.

7.      Click in the Expected box and type in the expected result of the new test step.

8.      Click the Save button at the bottom right of the Work Area to save your changes and exit the editing mode.

9.      Alternatively, click the Cancel button at the bottom right of the Work Area to exit without saving.

To Add Steps from another Test to a Linked Test

1.      Click on the Linked Test you wish to edit.

2.      Click the Test Definition tab at the top of the Work Area.

3.      Click the Edit Linked Test button.

4.      Right click on any Step or Call in the Step box. A drop down menu will appear.

5.      Click Linked Step. A second drop down menu will appear.

6.      Click New Linked Step on this drop down. TestCaddy Explorer will open in a new window, showing the Master Test Library and the Tests within it grouped by Functional Area.

7.      Navigate to the desired Test and then test step by clicking on the entries within each level. (Go back by clicking on the appropriate entry in the bread crumb trail at the top of the window.)

8.      Check the checkbox(es) next to the desired Test or Test Steps. (There is no need to check each step if you want all of them.)

9.   Click the Select button to add the selected Test or test step to the Linked Test.

10.  Click the Save button at the bottom right of the Work Area to save your changes and exit the editing mode.

11.  Alternatively, click the Cancel button at the bottom right of the Work Area to exit without saving.

Note: The Step that has been linked cannot be edited directly as it is now a Call. The new step that has been created actually calls the linked step from another Test, and it is only this new step which can be edited, using the right click menu to edit, delete, or move the step up or down.

It is important to note that all the steps from one test can be added to a Linked Test. In the TestCaddy Explorer window, navigate to the level where individual tests are displayed, then click on the checkbox to select a Test and click the Select button. A new step will be added to the Linked Test which calls all the steps from the selected Test. However, these 'Calls' cannot be edited or their order changed. If you wish to change their order then each step must be added separately.

As with Coupled Tests, Linked Tests can also have their Properties and Planning edited, or Jira tickets can be added.

5.4.4 Template Variables

Any Related Test (including the base Template Test)  can accept Custom Field Template Variables in any of its text fields.

Example:
A Step Description field could have the text:

"Install the product on {{OperatingSystem}}"
 
which includes the Template Variable for the "Operating System" Custom Field.

If you then select the WinXP value for the Operating System Custom Field under Test Properties, the text will change to,
"Install the product on {OperatingSystem: WinXP}"

Note that the single curly braces and the colon followed by the value indicate that the Custom Field value is being shown.  The double curly braces indicate it has not been resolved yet.  If the Custom Field value has not been set it will show with an empty value, e.g.

 {OperatingSystem: }

Typically you use such Template Variables in the base Template Test. Then Linked Tests, Instances or Coupled Tests will replace the variable with the value of the Custom Field that is set for them.  For this example a Coupled Test might have the Win8 Operating System Custom Field set, so it would read,
"Install the product on {OperatingSystem: Win8}"
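The replacement behaviour described above can be sketched with a simple regular expression. This is only an illustration of the concept, not TestCaddy's actual implementation:

```python
import re

def resolve(text, custom_fields):
    """Replace {{Name}} markers with {Name: value} using the test's
    Custom Field values; unset fields show an empty value.
    Illustrative sketch only, not TestCaddy's implementation."""
    def sub(match):
        name = match.group(1)
        value = custom_fields.get(name, "")  # unset fields resolve to ""
        return "{%s: %s}" % (name, value)
    return re.sub(r"\{\{(\w+)\}\}", sub, text)

step = "Install the product on {{OperatingSystem}}"

print(resolve(step, {"OperatingSystem": "WinXP"}))
# Install the product on {OperatingSystem: WinXP}
print(resolve(step, {}))
# Install the product on {OperatingSystem: }
```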

Code Completion:

Related Tests offer code completion menus when you type the opening brace character { in their text fields. The code completion menu allows you to choose from your available Custom Fields and insert one as a Template Variable into a text field for replacement.

Note: To enable Template Variables to work, the parent test called from any related test must be marked as a Template Test. This is usually done automatically when you create the related test; e.g. any test which is called from a Linked Test, or is the parent of a Coupled Test or Instance, automatically turns into a Template Test. If you create a new parent test and have not yet created a related test, you can turn on variables manually by clicking the Template button.


Advanced: You can also select the New Variable option on the code completion menu to create your own Test Specific Custom Fields that are only available on the test you created them on (or any Related Tests). They can be edited and values added in the Custom Fields Manager; they are located under the Generic Custom Fields > Test Specific item in the Custom Fields tree in the manager.



5.5 Layout

View > Layout allows you to set your Layout preferences. For example you can resize windows and panes (panels) to suit your work style, then at any time:

  1. Save your current layout.
  2. Revert to your last saved layout.
  3. Revert to the default layout.

By default, TestCaddy automatically saves the layout whenever you exit a mode or screen.

5.6 Advanced Settings

Configuration > Settings. The Settings dialog box lists all settings for the current user, with options to also show settings that apply to other users or to all users (useful for the Team Leader to view). All settings are readily edited by the user, or returned to default values.

Note: It is unusual to need to edit settings in this way. Most settings are set by TestCaddy for you as you use the product.

Some unusual settings can also be found in the Settings dialog.

5.7 Export Data

Data from TestCaddy can readily be exported in CSV format. Select any Test Set and, using the right click menu, select Export, and confirm where you would like the data to be filed. Alternatively, select any Folder and use the right click menu to export all Test Sets in that Folder.

The data exported will be in CSV (.csv) format and Excel (.xls) format. An MHTML version of the Excel format will also be created (.xls.mhtml). The CSV format is optimized for importing into another tool; the Excel format is optimized for readability.

Tip: Data exported to CSV will include testing history. However, should you prefer Tests to be exported without history, select the Master Test Library, right click on the top node in the Active Set Panel and choose the Export All Tests in Set option.
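Because the CSV export is optimized for importing into other tools, it can be consumed with any standard CSV reader. The sketch below uses Python's csv module; the column names and rows are invented for illustration, and the columns of a real export will depend on your TestCaddy data:

```python
import csv
import io
from collections import Counter

# A stand-in for an exported file; a real export will have
# TestCaddy's own columns, which may differ from these.
exported = io.StringIO(
    "Test Name,Status,Assignee\n"
    "Install on Win7,Passed,Bob\n"
    "Install on Win8,Failed,Bob\n"
    "Uninstall,Passed,Alice\n"
)

# Tally how many Tests carry each status value.
counts = Counter(row["Status"] for row in csv.DictReader(exported))
print(counts)  # Counter({'Passed': 2, 'Failed': 1})
```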

5.8 Avoid Repetitive Strain Injury

TestCaddy is designed so that the screen can be navigated and all functions performed using the keyboard. Testers no longer need to perform thousands of mouse clicks!


TestCaddy supports standard Windows HotKeys, which can be activated by pressing the Alt key. Commands with an associated HotKey will have a letter underlined. Pressing the underlined letter has the same effect as clicking that command.

Your keyboard can be used for all navigation and commands.

5.9 Common Keyboard Shortcuts

Here is a list of the most common TestCaddy shortcuts and the locations in which they can be used.

Ctrl-C Copies your selection to your clipboard. This can be used to make duplicates of many TestCaddy features.

Ctrl-X Removes your selection and places it on your clipboard. This can be used to move many TestCaddy features.

Ctrl-V Pastes the data from your clipboard to your selected location.

Ctrl-N Creates a new Test Set, Test or Test step.
Ctrl-Del Deletes a Test Set, Test or Test step.
Ctrl-Z Undoes your last change when entering text in a field.
Ctrl-Y Redoes your last change when entering text in a field.
Ctrl-P Passes your selected step and moves to the next step.
Ctrl-F Fails your selected step.
Ctrl-S Saves your current changes.
Tab and Shift-Tab Allow you to move forward and backward through TestCaddy fields.

Note: For a full list of TestCaddy shortcuts refer to Appendix 1 (Full List of TestCaddy Shortcuts)


6 Modules

6.1 Jira Features

6.1.1 Jira Integration 

TestCaddy is designed to integrate closely with Jira, a common bug-tracking tool. TestCaddy has been tested on versions 4.1 and above.

The Jira Integration feature is not enabled by default. To enable Jira Integration, ensure that Jira is set up to allow access to the API (Accept remote API calls). Then in TestCaddy click File > Jira Configuration Options, and TestCaddy will ask if you wish to enable Jira Integration for all users of the current product database. Note that each product database has its own Jira Configuration settings, so that different products can use different Jira installations.

Note: TestCaddy uses the Internet Explorer component to access Jira, so you will need to update your Internet Explorer to use Jira. Jira 5 requires Internet Explorer 8 or higher.
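If you want to confirm that your Jira server accepts remote API calls before enabling the integration, one approach (outside TestCaddy) is to request Jira's server-info REST resource with any HTTP client. The sketch below only builds the URL; `jira.example.com` is a placeholder, and the REST endpoint shown exists in Jira 5 and later (older versions expose SOAP/XML-RPC instead):

```python
from urllib.parse import urljoin

def server_info_url(jira_base_url):
    """Build the URL of Jira's serverInfo REST resource (Jira 5+).
    Fetching it with an HTTP client should return JSON describing
    the server if remote API access is enabled."""
    return urljoin(jira_base_url.rstrip("/") + "/", "rest/api/2/serverInfo")

print(server_info_url("https://jira.example.com"))
# https://jira.example.com/rest/api/2/serverInfo
```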

6.1.2 Jira Authentication

Users can be required to authenticate with their Jira username and password. Before applying this setting, ensure that your Jira server information and login credentials are correct; you can be locked out of your product database if these are incorrect. To enable this setting you must click on Configuration > Settings. A list of settings will be displayed. Scroll down until you find "Authentication, JIRA". Select this line and click the Edit button. Confirm that you wish to change the value and, after restarting TestCaddy, Jira Authentication will be enabled on login. 

Note: Once Jira Authentication is enabled the Jira URL cannot be edited under the Jira Configuration options.

6.1.3 Jira Tab Options


TestCaddy lets you update your Jira Tickets directly from within TestCaddy; changes are immediately visible to your entire team.

Once you have enabled Jira on your Product database, a Jira Tickets tab becomes available in the main TestCaddy Work Area. This tab contains options that allow you to synchronize your TestCaddy Tests with Jira Tickets.

The main list at the top of the tab shows all Jira Tickets that are linked to your currently selected Test and are within your current filter settings.

Note: In TestCaddy, Jira Tickets are linked to a parent Test. No matter which Test Set you are viewing an Instance of a Test from, you will see all Tickets that have been logged or linked to Instances of that Test. This is so a tester will be aware of any known outstanding issues associated with a Test, no matter which Test Set they run the Test from. This avoids wasting valuable tester time rediscovering known product bugs.

The other buttons and options on this tab are:

Synchronize
The Sync button forces a manual refresh of Ticket information from Jira. If you think that other users may have edited a Jira Ticket then this will allow you to view any changes that have been made.

Open Below
The Open Below button lets you select a Jira Ticket from the list of Tickets linked to the currently selected TestCaddy Test and open it in the viewing pane below, without opening a separate Internet browser.

Open In Browser
The Open In Browser button lets you select a Jira Ticket from the list of Tickets linked to the currently selected TestCaddy Test and open it in your default Internet browser.

Show All Tickets
When the Show All Tickets option is not checked, the list of linked Jira Tickets will be filtered, so that Tickets with a Status value of Closed are not listed. This is so a tester can easily see Tickets that are still open (not fixed, or resolved and ready for testing) and associated with the currently selected Test.

If you wish to see old Tickets that are closed, tick the Show All Tickets checkbox. This will clear the filter.

Note: The filter value is set for each user under Configuration > Settings > Jira, Filter Regexes. The default "Jira, Filter Regexes" value is a regular expression that filters out Tickets with a Status of Closed. For example, you could filter out both closed and resolved Tickets with the regular expression "(^Closed$)|(^Resolved$)".
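As a sketch of how such a status filter behaves (assuming standard regular-expression semantics; TestCaddy's exact regex dialect may differ, and the Ticket data below is hypothetical):

```python
import re

# The default filter hides Tickets whose Status is exactly "Closed";
# this extended pattern (from the example above) also hides "Resolved".
FILTER = re.compile(r"(^Closed$)|(^Resolved$)")

tickets = [
    {"id": "TC-101", "status": "Open"},
    {"id": "TC-102", "status": "Closed"},
    {"id": "TC-103", "status": "Resolved"},
    {"id": "TC-104", "status": "In Progress"},
]

# Keep only Tickets whose Status does NOT match the filter regex.
visible = [t["id"] for t in tickets if not FILTER.search(t["status"])]
print(visible)  # only the open/in-progress Tickets remain
```

Because the pattern is anchored with ^ and $, only an exact Status of "Closed" or "Resolved" is hidden; a Status such as "Reopened" would still be listed.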

Create New
The Create New button allows you to create a new Jira Ticket through TestCaddy. Once you click this button you are able to enter your desired Project, Type of Issue, Ticket Assignee, Ticket Summary and Ticket Details. Once submitted the Ticket will be logged in Jira instantly. It will automatically be linked to the current Test in TestCaddy.

You will also notice that TestCaddy puts a URL reference into the Ticket Description. If you are browsing a Ticket from within TestCaddy, you can click this link and TestCaddy will locate the Test linked to the Ticket. (Note, however, that this feature does not work when viewing a Jira Ticket in a browser outside TestCaddy.)

Link To Ticket ID
The Link To Ticket ID box allows you to link your currently selected Test to a Jira Ticket of your choice. Enter the Ticket prefix for the Jira project, a hyphen, and then the Ticket number (e.g. TC-123). Typing text into this box enables the Link button.
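The prefix-hyphen-number format described above can be checked with a simple pattern. This is a hypothetical validator for illustration only; TestCaddy's own validation of the Link To Ticket ID box may differ:

```python
import re

# Hypothetical pattern for a Jira Ticket ID such as TC-123:
# an uppercase project prefix, a hyphen, then the Ticket number.
TICKET_ID = re.compile(r"^[A-Z][A-Z0-9]*-\d+$")

def is_valid_ticket_id(text: str) -> bool:
    """Return True if the text looks like a Jira Ticket ID (e.g. TC-123)."""
    return bool(TICKET_ID.match(text.strip()))

print(is_valid_ticket_id("TC-123"))  # True
print(is_valid_ticket_id("TC123"))   # False: the hyphen is missing
```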

Link
The Link button is enabled once text has been typed into the Link To Ticket ID box. It allows you to link your chosen Test to a Jira Ticket.

Unlink
The Unlink button allows you to select a Jira Ticket to which your selected Test has been linked, and to unlink the two. This removes all ties between the Jira Ticket and the selected Test.


7 Dashboard and Reports

7.1 Dashboard features

The Dashboard displays progress on any Project or Test Set in the form of Graphs, Tables and Reports. By default the Dashboard shows a simple progress report with the status of all Tests within the Project or Test Set. (Report name is “Progress”.) You may also use it to display your own progress, showing the status of only those Tests assigned to you. (Report name is “My Progress”.)

In addition, the Dashboard can be used to produce new, customized reports or graphs and export these to a file, email or website. This allows you to display the status of a Project to colleagues, clients or customers.

7.1.1 Display Progress on the Dashboard

To display progress as graphs
  1. Select a Project or Test Set on the Projects Panel
  2. Click on the Dashboard tab at the bottom left of the Work Area
By default, TestCaddy will display two graphs: a bar chart showing the total number of Tests in the selected Project or Test Set, with each bar divided into colored sections by Current Status (Passed, Failed, etc.), and a pie chart whose sectors also show the Current Status by color.
It is possible to display the description (Filter Description) of the data contained in the graphs, in terms of which folder, or folders, have been selected. Click on the Filter Description button to the right of the graphs to see this information on the Dashboard. The button will now become a 'Filter Description (Hide)' button, and clicking this will remove the information again.
Note that if a Project is selected in the Projects Panel then the bar chart will include a separate bar for each Test Set within that Project. Test Sets that do not yet contain any Tests will not be displayed.
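Both charts are simple aggregations of each Test's Current Status. A minimal sketch of that aggregation, using hypothetical status values (not TestCaddy's internal code):

```python
from collections import Counter

# Hypothetical Current Status values for the Tests in a selected Test Set.
statuses = ["Passed", "Passed", "Failed", "Not Run", "Passed", "Deferred"]

# Each bar segment or pie sector is just the count of Tests per status.
counts = Counter(statuses)
total = sum(counts.values())

for status, n in counts.most_common():
    share = 100.0 * n / total  # pie-chart sector size, as a percentage
    print(f"{status}: {n} test(s), {share:.0f}% of the chart")
```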

To display progress as a table
  1. Select a Project or Test Set on the Projects Panel
  2. Click on the Dashboard tab at the bottom left of the Work Area. By default the two graphs will display
  3. Click on the Table button to the right of either graph to see the data in table form.
Note that the Table button will now become a 'Table (Hide)' button. Click on this to remove the table from the Dashboard display.

To change the Test Set or Project being displayed
  1. Click on the desired entry in the tree-view on the Projects Panel.
The Dashboard display should update automatically.

To display your own progress for Tests within a Project or Test Set
  1. Select a Project or Test Set on the Projects Panel
  2. Click on the Dashboard tab at the bottom left of the Work Area.
  3. Click on the My Progress tab at the top of the Dashboard.
The Dashboard will then show only those tests that have been assigned to you. To return to a display of all progress, click on the Progress tab.
Note: When moving from one Project or Test Set to another, the Dashboard will remember your last selection of Progress or My Progress and create the appropriate display.

If at any point a Project or Test Set is selected which contains no Tests (or no Tests assigned to you if using the My Progress tab), then the bar graph and pie chart will be replaced by a text box containing the comment “No Data to Display”.

7.1.2 Exporting graphs from the Dashboard


It is possible to export graphs from the Dashboard using the Export button at the bottom right of the Dashboard.
  1. Ensure the graphs you wish to export are visible on the Dashboard
  2. Click on the Export button at the bottom right of the Dashboard
  3. In the dialog box that opens, select a destination you want to export to
  4. Check the name that will be given to your exported graphs and change if necessary
  5. Click the Save button.
A message will indicate whether the export has been successful or not.


7.1.3 Changing the default report on the Dashboard

The first time you display the Dashboard after opening TestCaddy or performing any edit or test run, the Dashboard will display the Progress report as the default report. You can change the default report.
  1. Click on the Dashboard tab at the bottom left of the Work Area
  2. Click on the Configure button at the bottom left of the Dashboard. A list of saved reports will appear in a dialog under the heading Show On Dashboard
  3. Click on one of the reports to select it
  4. Use the Move Up or Move Down buttons to change the order of reports
  5. Click OK to save the order.
The uppermost report in the list will appear as the farthest left tab at the top of the Dashboard. This will be the default report when you first open the Dashboard.

7.1.4 Closing the Dashboard and returning to the standard Work Area display

  1. Click on the Work Area tab at the bottom left of the Work Area
  2. In some circumstances this action may need to be repeated to display the Work Area

7.2 Working with Reports and Graphs

The Dashboard also allows access to other reporting options, where new graphs and reports can be created, edited and exported to other programs. To access these options while viewing the Dashboard
  1. Click on the Reports button at the bottom right of the Dashboard.
TestCaddy Reports will open in a new window, with tabs allowing you to display Graphs and Reports and access a range of options for creating and managing new reports.


7.2.1 Viewing a Report

  1. Click on the dropdown arrow on the Select Reports tab. A menu of saved Reports will drop down.
  2. Click on the appropriate Report to select it. A window will appear showing a filter that can be used to select the Project or Test Set for which you wish to display data.
  3. Select the desired Project or Test Set then click on the Display button at the bottom right of this window.
The Report will display in the same way as you would see it on the Dashboard, and Table and Filter buttons are available to show further information. The Report can be exported by clicking on the Export button at the bottom left of the window. Other option buttons at the bottom left of the window allow the Report to be edited, saved as a new Report, or closed.

7.2.2 Editing a Report

  1. Click on the Select Reports tab, and follow the instructions above to select a Report and the appropriate Project or Test Set (as above).
  2. Click on the Edit button, to open an editing window.
In this window you can use the appropriate drop-down or Edit buttons to change the Report's settings.
Note that to save the Report you must first display it by clicking the Display button at the bottom right of the editing window. Once it has been displayed, the Save button becomes active; alternatively you can return and edit it further before saving.


7.2.3 Creating a New Report

  1. Click on the New Report tab. An editing window will appear.
  2. Select the features you want for your New Report as given above.
  3. Display the Report by clicking on the Display button.
  4. Once you are happy with your Report, click on Save to add this to your saved Reports.

Note that all these functions may be accessed by clicking the Manage Reports button. This will open a window showing a list of saved Reports, and providing buttons to Edit, Delete or Display the selected Report. In addition the 'owner' of a Report may be changed, or a New Report created.

7.2.4 Viewing a Graph

  1. Click on the dropdown arrow on the Select Graphs tab. A menu of saved Graphs will drop down.
  2. Click on the appropriate Graph to select it. A window will appear showing a filter that can be used to select the Project or Test Set for which you wish to display data.
  3. Select the desired Project or Test Set then click on the Display button at the bottom right of this window.
The Graph will display and can now be edited, renamed or saved using the buttons at the bottom left of the window.

7.2.5 Editing a Graph

  1. Select and display the Graph (as above).
  2. Click on the Edit button at the bottom left of the screen to open an Edit Graph window.
In this window you can edit the Graph's settings.
Note that the More Options button also gives access to advanced options where the Graph Set may be changed. Graph Sets are XML files that determine which X and Y options are available for graphs. They can be created by advanced users or can be requested from TestCaddy Support.




8 Documentation

This TestCaddy User Manual is available on a Menu tab in the Work Area (Help > User Manual). There are two options: a local version of the manual for quick access, or a link to the online version for the latest updates. As with other TestCaddy tabs you can drag it so that the window floats and is always on top.

The TestCaddy Release notes for your latest build can be found here, or in <InstallationFolder>\TestCaddyManual\ReleaseNotes.html.

The TestCaddy FAQ (Frequently Asked Questions) section of our website contains useful information for every step of TestCaddy usage. If you have any issues with TestCaddy please check these FAQs to see if your problem can be easily resolved before contacting us. The FAQ can be accessed here.



9 Appendices

9.1 Full List of TestCaddy Shortcuts

Each row shows the effect of the shortcut in each of the four areas (FA = Functional Area).

Shortcut | Edit/Run Test mode | Project Test Sets | Active Set | Master Test Library
Ctrl-C | Copy selected text to Clipboard | Copy selected Folder or Test Set | N/A | Copy selected FA or Test
Ctrl-X | Cut selected text to Clipboard | Cut selected Folder or Test Set | N/A | Cut selected FA or Test
Ctrl-V | Paste text from Clipboard | Paste a copied/cut Folder or Test Set | N/A | Paste a copied/cut FA or Test
Ctrl-N | Creates a new Test Step | Creates a new Test Set | Creates a new Test | Creates a new Test
Ctrl-Del | Deletes selected Test Step | Deletes selected Folder or Test Set | N/A | Deletes selected FA or Test
Ctrl-D | Moves selected Test Step down | Moves selected Folder or Test Set down | N/A | Moves selected FA or Test down
Ctrl-U | Moves selected Test Step up | Moves selected Folder or Test Set up | N/A | Moves selected FA or Test up
F2 | N/A | Renames selected Folder or Test Set | Renames selected FA or Test | Renames selected FA or Test
Esc | Cancels changes | Cancels rename | Cancels rename | Cancels rename
Ctrl-S | Saves changes | N/A | N/A | N/A
Ctrl-M | N/A | Assign Test to current user | N/A | N/A
Ctrl-End | N/A | N/A | Removes selected Test(s) from Test Set | N/A
Ctrl-P | Sets selected Test Steps as Passed | N/A | N/A | N/A
Ctrl-F | Sets selected Test Steps as Failed | N/A | N/A | N/A
Ctrl-E | Sets selected Test Steps as Deferred | N/A | N/A | N/A
Ctrl-L | Sets selected Test Steps as N/A | N/A | N/A | N/A
Ctrl-G | Sets selected Test Steps as Not Run | N/A | N/A | N/A
Ctrl-Alt-P | Sets all Test Steps as Passed | N/A | N/A | N/A
Ctrl-Alt-F | Sets all Test Steps as Failed | N/A | N/A | N/A
Ctrl-Alt-E | Sets all Test Steps as Deferred | N/A | N/A | N/A
Ctrl-Alt-L | Sets all Test Steps as N/A | N/A | N/A | N/A
Ctrl-Alt-G | Sets all Test Steps as Not Run | N/A | N/A | N/A
Ctrl-Z | Undoes latest change (in Editor Field) | N/A | N/A | N/A
Ctrl-Y | Redoes latest undo (in Editor Field) | N/A | N/A | N/A
Ctrl-B | Sets text to Bold (in Editor Field) | N/A | N/A | N/A
Ctrl-I | Sets text to Italic (in Editor Field) | N/A | Creates an instance of selected Test | N/A
Ctrl-U | Sets text to Underline (in Editor Field) | N/A | N/A | N/A
Ctrl-A | Select all text | Select all Folders and Test Sets | Select all Functional Areas and Tests | Select all Functional Areas and Tests
Ctrl-R | N/A | Refresh | Refresh | Refresh
Tab | Move forward to next field | N/A | N/A | N/A
Shift-Tab | Move backwards to previous field | N/A | N/A | N/A
Ctrl-1 | Focus on Projects | Focus on Projects | Focus on Projects | Focus on Projects
Ctrl-2 | Focus on Active Set | Focus on Active Set | Focus on Active Set | Focus on Active Set
Ctrl-3 | | | |
Ctrl-4 | Change to layout 1 | Change to layout 1 | Change to layout 1 | Change to layout 1
Ctrl-5 | Change to layout 2 | Change to layout 2 | Change to layout 2 | Change to layout 2
Ctrl-8 | Enable transparent windows | Enable transparent windows | Enable transparent windows | Enable transparent windows
Ctrl-9 | More Transparent | More Transparent | More Transparent | More Transparent
Ctrl-0 | Less Transparent | Less Transparent | Less Transparent | Less Transparent
Up Arrow | Navigate up | Navigate up | Navigate up | Navigate up
Down Arrow | Navigate down | Navigate down | Navigate down | Navigate down
Right Arrow | Navigate right | Expand node | Expand node | Expand node
Left Arrow | Navigate left | Contract node | Contract node | Contract node