Test Environments

Environments are logical entities required for executing a test object; typically, they are a configuration of cloud resources that runs a web application or some other software artefact or service. What does a base structure of test environments look like? What is the relation between environment configuration and testing? In what follows, we'll cover these two questions in turn.

Base Structure

A default or base structure of environments includes entities called Development (DEV), Integration (INT), User Acceptance Test (UAT), and Production (PROD). These are the canonical names; depending on the organisational context, functionally similar entities might be named slightly differently. Typically, the artefact versions deployed to these environments differ, but that need not be the case; for example, DEV and INT might be kept in sync. The newest software version is deployed to DEV, while PROD runs the oldest of all deployed versions. If performance or stress tests are required, they are usually executed in a dedicated environment in order to obtain meaningful results.
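The base structure can be sketched as a simple ordered configuration. This is a minimal illustration only; the environment names follow the canonical structure above, while the purposes and version numbers are invented for the example.

```python
# Sketch of the canonical DEV -> INT -> UAT -> PROD pipeline.
# Purposes and artefact versions are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Environment:
    name: str
    purpose: str
    artefact_version: str

# Ordered from 'left' (newest code) to 'right' (oldest deployed code).
PIPELINE = [
    Environment("DEV", "development and low-level testing", "1.4.0"),
    Environment("INT", "integration testing", "1.3.2"),
    Environment("UAT", "user acceptance testing", "1.3.0"),
    Environment("PROD", "serving customers", "1.2.5"),
]

# The newest version runs on DEV, the oldest deployed version on PROD.
print([e.name for e in PIPELINE])
```

Keeping such a definition in one place makes the 'left to right' ordering of environments explicit, which matters later when deciding where a given test should run.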

A variety of characteristics define environments; the three most important are purpose, context, and data.

Purpose

The purpose of PROD is to make functionality available to customers. The purpose of non-productive environments is to allow for testing. Note, though, that each non-productive environment serves distinct purposes, derivable from the goals of its associated test levels. For example, user acceptance testing does not aim at finding bugs and thus implies different requirements compared to other testing environments.

Context

Another major factor is the share of real context systems. Usually, this varies from 'everything mocked away' (DEV) through 'some things mocked away' (INT) to 'everything real' (PROD). UAT is a special case that can be closer to either INT or PROD, depending on organisational factors. Contextual variance is the primary reason for behavioural differences between environments. Typically, it is 'easier' to test on DEV, which is therefore often the preferred choice. I'll come back to this below.
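The 'mocked away vs. real' distinction often shows up in code as a factory that wires up either a stub or a real client depending on the environment. The following is a hypothetical sketch; `PaymentClient` and the wiring logic are invented for illustration, not taken from any specific framework.

```python
# Hypothetical sketch: choosing mocked or real context systems per environment.

class RealPaymentClient:
    """Would talk to the actual context system (INT/PROD)."""
    def charge(self, amount: int) -> dict:
        raise NotImplementedError("would call the real payment system")

class MockPaymentClient:
    """Deterministic canned responses, suitable for DEV."""
    def charge(self, amount: int) -> dict:
        return {"status": "ok", "amount": amount}

def payment_client(env: str):
    # On DEV everything is mocked away; on INT and PROD the real system is used.
    if env == "DEV":
        return MockPaymentClient()
    return RealPaymentClient()

client = payment_client("DEV")
print(client.charge(42)["status"])
```

The mocked client behaves deterministically, which is exactly why testing on DEV feels 'easier', and also why its results say less about integration.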

Data

Test data varies between environments, but usually less strictly than context does. It is considered best practice these days to work with production data in production only; oftentimes, compliance rules demand this. It may happen, though, that production data is used in UAT, either because it's faster and simpler, or because stakeholders demand 'realistic' behaviour, which they believe only production data can guarantee.

Test data in testing environments is either completely synthetic (e.g., in DEV), or anonymised production data or production-like synthetic data (e.g., in INT). While most behavioural differences between environments stem from contextual factors, test data may also influence behaviour. Examples include filled vs. empty fields; differences in default values; uniformity of data (e.g., length of strings); structure of test data (e.g., semantics of substrings); variance of data; and underspecified interface contracts.
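Anonymising production data for a testing environment can be sketched as follows. The field names and the pseudonymisation scheme are illustrative assumptions; a real pipeline would be driven by the applicable compliance rules.

```python
# Hypothetical sketch: turning a production record into an anonymised one
# for use in INT. Field names are invented for illustration.
import hashlib

def anonymise(record: dict) -> dict:
    out = dict(record)
    # Replace personally identifiable fields with stable pseudonyms, so that
    # referential integrity (same person -> same pseudonym) is preserved.
    out["name"] = hashlib.sha256(record["name"].encode()).hexdigest()[:12]
    out["email"] = out["name"] + "@example.invalid"
    return out

prod_record = {"name": "Ada Lovelace", "email": "ada@example.com", "balance": 100}
anonymised = anonymise(prod_record)
print(anonymised["email"])
```

Note that such a scheme deliberately preserves non-personal attributes (here, `balance`), since stripping them would change exactly the data characteristics, such as variance and field population, that can influence behaviour.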

Context/Purpose Dependency

Context and purpose are loosely connected. There might be distinct purposes where the context is the same. For example, user acceptance testing and security testing may happen in different environments (e.g., for pragmatic reasons when you want to execute tests in parallel, or because you want to make sure that the state of UAT remains untouched until the next iteration) even though the surrounding systems are the same (as that might be a prerequisite of valid results in both cases). But there are also situations in which distinct purposes imply distinct contexts.

DEV vs. INT is a typical example of this kind. The share of systems 'mocked away' ranges from 'everything' to 'almost everything' in the former case and from 'almost nothing' to 'nothing' in the latter. (There might be exceptions, but this is the rule of thumb.) So INT is usually harder to create, harder to maintain, and, all else being equal, more volatile. You create it nevertheless, since it allows you to execute test levels that you could not execute otherwise.

Environments and Testing

Generally, the connection between environment configuration and testing is straightforward. Test levels are associated with specific testing goals and test types, and environments provide a logical space where those test types can be executed properly. Indirectly, this forces testers to test 'as far to the left as necessary'. When I worked in tester roles and a given project had more than one environment, I usually tried to test 'as far to the right as possible'.

Testing on the Right-Hand Side

Why is that? Because of the value of the results and their scope: successfully testing a feature in a more integrated environment logically implies that it would also pass in a less integrated one. Test data quality is higher and typically more varied; context systems are real and thus more demanding. Reasons for being forced to test on DEV instead of INT, if both exist, include: INT is unavailable; the versions differ and the feature is unavailable on INT; access to INT is restricted. In case a feature did not work on INT, I could still double-check against DEV whether that was due to failed integration or to a bug in the feature itself.

Sometimes, purely pragmatic reasons play a role. I was once confronted with a DEV and an INT that differed in terms of context systems and test data but otherwise had no restrictions. Both were constantly available and accessible, and were kept in sync as regards deployed versions, with deployments happening several times a day. In terms of the relevant environment properties, both were equally 'good'.

Comfort Attributes of Environments

There was another aspect that made it even more convenient to test against INT. Very often, I had to manipulate test data during execution or prepare large sets of test data beforehand. In DEV, that was quite cumbersome, as I had to talk to an API and manipulate huge requests. I also had to check the specification manually for plausibility checks, boundary values, etc., and look up the mappings of relevant test data values to technical codes in yet another document every time I created a new request. In INT, there was simply a frontend for all of this; no need to worry about mappings, plausibility checks, default values, etc.

There is one exception to the general rule of testing 'as far to the right as possible', namely production. Testing in production is considered harmful and should be avoided at all costs.
