Summary of article: Len DiMaggio, "Orchestrating Integration
Testing", STQE Magazine July/August 2003.
Systems are put together from components supplied by many parties.
They may also be distributed.
The general question: Which variables in local systems can have
which impact globally? Normally, they should have been identified in
the design, but some are not. If the system is composed of many
components, both COTS and specially developed ones, the integration
test simply has to proceed, with the variables identified during the
test from system logs. Run transactions through the system, check
their results, and have all logging facilities on. The logs will
contain
- Lists of places where subsystems expect to find executable
programs or libraries (the registry, for example)
- Which database connections were established and closed by
whom and when?
- Which communication ports were assigned for what?
- How were data converted and where?
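The log items above can be mined mechanically. A minimal sketch, assuming an invented log line format ("... DB-CONNECT <id>" / "... DB-DISCONNECT <id>") rather than any real product's format, reporting database connections that were opened but never closed:

```python
import re

# Invented log format; adapt the patterns to the logs actually produced.
OPEN = re.compile(r"DB-CONNECT (\S+)")
CLOSE = re.compile(r"DB-DISCONNECT (\S+)")

def unmatched_connections(log_lines):
    """Return the ids of connections that were opened but never closed."""
    open_ids = set()
    for line in log_lines:
        m = OPEN.search(line)
        if m:
            open_ids.add(m.group(1))
            continue
        m = CLOSE.search(line)
        if m:
            open_ids.discard(m.group(1))
    return open_ids
```

The same skeleton works for the other pairings the logs reveal: port assignments and releases, file opens and closes, conversions in and out.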
Change the variables:
Exchange databases with other vendors' ones. Exchange versions.
Exchange network support... See if something bad happens when doing
legal operations of this kind.
- Check failed transactions. How are they logged and recovered,
and is the information given to users and support staff good enough?
- Check the logs for startup, shutdown and data processing. Is
everything as it should be? What about cleanup?
Maybe several databases, supplied by several vendors. Not all
support all data types, or not all in the same way. DATE is
especially problematic.
- Data may be converted, and conversion can cause trouble. Follow
some sample data through the system. Check the stored values, not
only the end result.
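Checking the stored value rather than the end result can be sketched with SQLite, standing in here for whatever databases the system actually uses. SQLite has no native DATE storage class, so what lands in a DATE column is whatever representation the application handed over:

```python
import datetime
import sqlite3

# SQLite stands in for the real database. Inspecting the stored value
# and its storage type shows what conversion happened on the way in.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, shipped DATE)")
conn.execute("INSERT INTO orders VALUES (1, ?)",
             (datetime.date(2003, 7, 1).isoformat(),))
stored, storage_type = conn.execute(
    "SELECT shipped, typeof(shipped) FROM orders").fetchone()
print(stored, storage_type)  # the value as the database actually keeps it
```

Another vendor's database may keep the same date as a native date, a number, or differently formatted text; comparing stored values across databases is what surfaces the conversion.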
- XML data parsing: Most databases support XML data, but some may
not. Multiple parsers, each requiring different supporting
software, possibly even conflicting, may be in use. Parsers also
contain problems. Send data in XML format through the parsers and
check the result in the databases. Check performance as well.
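A minimal parser check, using Python's bundled xml.etree parser and invented sample data: parse the document the way a subsystem would and verify the values survive the parser, before checking what ends up in the database.

```python
import xml.etree.ElementTree as ET

# Invented sample data; the point is to verify values survive parsing
# unchanged before blaming the database for a bad stored value.
sample = "<order id='42'><item sku='A-1' qty='3'/></order>"

root = ET.fromstring(sample)
item = root.find("item")
assert root.get("id") == "42"
assert item.get("sku") == "A-1"
assert int(item.get("qty")) == 3
```

Running the same sample through each parser in the integrated system separates parser defects from storage defects.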
- SQL syntax differences: For example, reserved words ("OID" is
reserved in Oracle).
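The reserved-word problem can be demonstrated with SQLite as a stand-in (the article's example is Oracle's "OID"): "order" is a reserved word in SQL, so the same CREATE TABLE succeeds or fails depending on quoting.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# "order" is an SQL reserved word; unquoted it is a syntax error here,
# just as "OID" would be in Oracle but not necessarily elsewhere.
try:
    conn.execute("CREATE TABLE t (order INTEGER)")
    print("unquoted identifier accepted")
except sqlite3.OperationalError as e:
    print("unquoted identifier rejected:", e)

# Quoting the identifier makes the same statement portable.
conn.execute('CREATE TABLE t ("order" INTEGER)')
```

Schemas and queries that run cleanly against one vendor's parser are worth replaying against every database in the integration.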
Inter-program dependencies and start order
Some processes on one machine may depend on processes on other
machines, for example an application server on a database server.
Could this go wrong? Naive users starting the whole thing up?
Initializing processes taking too long?
- Is the required start order supportable? New processes coming
in between the others may have side effects on the ones coming
after. The order may not even be documented.
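One way to make the start order explicit and checkable is a topological sort over declared dependencies; the process names below are invented for illustration:

```python
from graphlib import CycleError, TopologicalSorter

# Invented example: each process maps to the processes it must wait for.
deps = {
    "database": set(),
    "monitor": set(),
    "app_server": {"database"},
    "web_frontend": {"app_server"},
}

try:
    order = list(TopologicalSorter(deps).static_order())
    print("valid start order:", order)
except CycleError as exc:
    print("circular dependency:", exc)
```

Writing the dependencies down at all is the real gain: a new process slotted into the table either fits the order or is flagged as a cycle, instead of silently perturbing whatever undocumented order was in use.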
- Break the chain of dependencies in distributed systems. What
happens if one process or server is down and comes up again? Are
there still informative error messages reported to both users
and logs? Are dead processes restarted by the monitoring
process? Do processes recover from network problems (overload,
outage, transmission errors)? Most important is to check what
happens when database servers are down.
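A test harness for the down-and-up case can use a retry loop like this sketch, where the probe is any callable (e.g. a connect attempt) that raises OSError while the peer is down:

```python
import time

def wait_until_up(probe, attempts=5, delay=0.1):
    """Retry `probe` with a growing delay; True once it succeeds,
    False if it never does within the given attempts.

    A test can kill a server, start this loop, restart the server,
    and then verify the client side recovered instead of staying dead.
    """
    for attempt in range(attempts):
        try:
            probe()
            return True
        except OSError:
            time.sleep(delay * (2 ** attempt))
    return False
```

The same loop doubles as a check on the monitoring process: if the monitor is supposed to restart dead processes, the probe should eventually succeed without manual intervention.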
Installation and configuration
Directories, files and environment variable settings (including PC
registry) common to multiple integrated subsystems should not be in
conflict. (One subsystem requiring a differing value from another
for the same variable).
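Such conflicts can be checked mechanically once each subsystem's required settings are written down; the function below (subsystem and variable names invented) reports variables for which two subsystems demand different values:

```python
def setting_conflicts(requirements):
    """Given {subsystem: {variable: required value}}, report variables
    for which two subsystems demand different values."""
    seen = {}        # variable -> (first subsystem, its required value)
    conflicts = []
    for subsystem, settings in requirements.items():
        for var, value in settings.items():
            if var in seen and seen[var][1] != value:
                conflicts.append((var, seen[var], (subsystem, value)))
            else:
                seen.setdefault(var, (subsystem, value))
    return conflicts
```

The same table works for environment variables, registry keys, and shared configuration files alike.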
- Communication ports in conflict? Who uses which ports and
when? Some conflicts are not easy to find, as not all ports may
always be in use. The netstat command and the /etc/services file
(PC and Unix) are good places to look.
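A quick point-in-time probe for whether a port is already taken; note that, as above, it cannot see a port whose user is only occasionally running, which is exactly the hard case:

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """True if nothing is listening on host:port right now.

    A point-in-time check only: a port that is free at probe time may
    still be claimed later by a subsystem that uses it occasionally.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((host, port)) != 0
```

Running such probes repeatedly during a full transaction run, rather than once at startup, catches more of the intermittent users.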
- Do the subsystems require different versions of the same
supporting software?
- Is configuration data consistent over the systems? Read the
configuration data on each system and compare.
- Differing operating environments. A program may require
certain levels of user privileges in conflict with those of other
programs. What user accounts are created during installation, and
are their privileges correct?
- Is the default configuration / installation supportable?
Users choosing the default installation may end up with an
inefficient or insecure installation. Check BugNet, BugTraq and
user groups for known problems.