In our latest Halloween-themed Tech Talk, Emily O’Connor, Principal Test Engineer, shared her tips for avoiding some nightmare scenarios in development teams.
Emily described the three types of project Audacia typically works on: technology consultancy, project delivery and team augmentation, suggesting that these best practices can be applied in each context.
Using “trick” and “treat” moments to frame each challenge, Emily’s Tech Talk unveils the tricks and perils that lurk in the shadows of development projects, and reveals the sweet treats, the best practices that can be used to navigate these situations and avoid testing tales of terror.
The Cursed Card: “That’s Not Testing My Card” 🃏
Trick:
These bugs may be found during exploratory testing, while writing data setup for automation, or while following an end-to-end user journey that developers have recently contributed to. The phrase echoes in teams where developers dismiss bugs as “not related” to the feature they recently implemented.
This mindset can be a side effect of the invisible boundaries between tasks, which cause system-wide quality concerns to be overlooked. These issues may be ignored, or rarely make it into the backlog or roadmap, and often turn into lingering system defects that increase friction between developers and testers.
Treat:
Before code is written, planning or “three amigos” sessions can explore the problem statement and set clear expectations of what will and won’t be tested, both in an individual feature card and in the overall user journey. This ensures bugs can be raised without blame and picked up close to the time of implementation, and it encourages quality as a whole-team responsibility.
Additionally, writing test cases early and collaboratively ensures testing is performed at the appropriate “level”, i.e. in unit tests, API tests or end-to-end UI tests.
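As a sketch of testing at the appropriate level, the hypothetical discount rule below is checked with fast unit tests rather than re-verified through the UI; the function and voucher codes are invented for illustration.

```python
# Hypothetical feature: a voucher-code discount rule. Checking the
# calculation at the unit level means the end-to-end UI journey only
# needs to cover it once, not for every code.

def apply_discount(total: float, code: str) -> float:
    """Apply a voucher code to an order total (illustrative logic)."""
    if code == "SPOOKY10":
        return round(total * 0.9, 2)
    return total


def test_known_code_applies_ten_percent():
    assert apply_discount(100.0, "SPOOKY10") == 90.0


def test_unknown_code_leaves_total_unchanged():
    assert apply_discount(100.0, "GHOST") == 100.0


if __name__ == "__main__":
    test_known_code_applies_ten_percent()
    test_unknown_code_leaves_total_unchanged()
    print("unit-level checks passed")
```

Agreeing in planning which behaviour lives at which level keeps suites fast and avoids duplicating the same assertion across unit, API and UI tests.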
The Phantom Responsibility Shift: “I’ll Ask Your Test Manager” 👻
Trick:
Escalating issues to a test manager instead of discussing them with team members can lead to a “phantom shift” in responsibility. This horror story involves bypassing the team’s expert domain knowledge, as decisions are deferred to those who, while an escalation point, may be less familiar with the day-to-day intricacies of the project. This can slow down problem-solving and diminish the importance of each team member’s role in quality assurance.
Treat:
A flatter structure can empower teams to own their work, sharing insights and knowledge as if passing out Halloween candy. Implementing matrix management can prove beneficial, where test leads oversee specific areas and bridge gaps between teams. This approach fosters a culture where project teams work as a cohesive unit, valuing each member’s input to address issues promptly.
The Graveyard Test Environment: “Where Test Data Rots” 💀
Trick:
An unmanaged test environment can become a graveyard where old test data piles up, causing performance to slow and relevance to dwindle. Outdated data can lead to unrepresentative test scenarios, which then impact debugging, especially during user acceptance testing (UAT) or when reproducing bugs.
Treat:
Avoid a graveyard environment by setting up automated scripts to regularly reset and clean up data, much like clearing out last year’s Halloween decorations. This could be extended into a standardised process that not only clears stale data but also populates the test environment with fresh, relevant data, representative of the ratio of data types found in live – you might have one ghost per room in each haunted house. As an ultimate step, using version control for test environment configurations can provide easy tracking, reversion, and consistency across testing phases.
Assigning a tester to monitor test environment performance can prevent data from rotting over time. This ensures testers regularly review the quality of test environments and are likely to spot any data created by automated tests that is not removed in teardown steps.
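A reset-and-seed script along these lines could look like the following minimal sketch, using SQLite and an invented ghosts-per-rooms schema to stand in for a real test database; table names, the 30-day cutoff and the one-ghost-per-room ratio are all assumptions for illustration.

```python
import sqlite3
from datetime import datetime, timedelta, timezone


def reset_test_data(conn: sqlite3.Connection, max_age_days: int = 30) -> None:
    """Purge stale test rows, then seed fresh data in a live-like ratio
    (assumed here: one ghost per room)."""
    cutoff = (datetime.now(timezone.utc) - timedelta(days=max_age_days)).isoformat()
    # Clear out last year's decorations: anything older than the cutoff.
    conn.execute("DELETE FROM ghosts WHERE created_at < ?", (cutoff,))
    now = datetime.now(timezone.utc).isoformat()
    # Top every room back up to exactly one ghost, matching live ratios.
    for (room_id,) in conn.execute("SELECT id FROM rooms"):
        exists = conn.execute(
            "SELECT 1 FROM ghosts WHERE room_id = ?", (room_id,)
        ).fetchone()
        if exists is None:
            conn.execute(
                "INSERT INTO ghosts (room_id, created_at) VALUES (?, ?)",
                (room_id, now),
            )
    conn.commit()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE rooms (id INTEGER PRIMARY KEY)")
    conn.execute("CREATE TABLE ghosts (room_id INTEGER, created_at TEXT)")
    conn.executemany("INSERT INTO rooms (id) VALUES (?)", [(1,), (2,), (3,)])
    # One rotting ghost from long ago that the reset should sweep away.
    conn.execute("INSERT INTO ghosts VALUES (1, '2000-01-01T00:00:00')")
    reset_test_data(conn)
    print(conn.execute("SELECT COUNT(*) FROM ghosts").fetchone()[0])  # 3
```

Scheduling a script like this in the pipeline (nightly, or before each test run) keeps the environment representative without manual housekeeping.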
The Shapeshifter: “That’s Not Even a Bug” 🔵
Trick:
Some issues may initially seem minor, like usability or accessibility concerns, and in our haunted house of software development they get dismissed as “not real bugs”. The nightmare is that these bugs, which tend to fall within the realm of non-functional requirements, are easy to ignore, yet they’re crucial to the user experience and can degrade product quality if left unchecked.
Treat:
To combat this, document non-functional issues meticulously. Screenshots, detailed steps to reproduce, and clear communication with product owners and developers can help ensure these issues are taken seriously. Clarifying the difference between a bug and a feature request in documentation also helps streamline decision-making. Encouraging open dialogues fosters empathy and collaboration between testers and developers, creating a stronger, more cohesive team.
The Phantom Machine: “It Works on My Machine” 🕸
Trick:
This classic phrase signals potential chaos. When a feature only works on a developer’s machine but fails in other environments, it can indicate inconsistencies in setup or configuration that will go on to haunt production. Developers may defend their code, leading to frustration on all sides and, often, delayed releases. This phrase is one of the most commonly cited nightmare scenarios for testers, who can’t very well ship the developer’s machine! Test engineers may find it especially galling, as it implies an “over the fence” attitude to quality: the developer has not checked their work following the code review and release process.
Treat:
Emily suggests taming this phantom with infrastructure as code that ensures consistent environments. CI/CD pipelines are invaluable here, helping reduce the likelihood of environment-specific issues before they impact production by creating repeatable deployments that can be rolled back if necessary. Documenting environment config allows teams to spot differences and manage setup requirements, while chaos engineering can reveal lurking environment-specific issues before they become nightmares in production.
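To illustrate the “document environment config and spot differences” idea, here is a minimal sketch of a config diff check; the environment names and keys (`runtime`, `feature_flags`, `db`) are invented for the example, and a real setup would load these from versioned config files rather than inline dictionaries.

```python
# Compare two documented environment configs and report any key whose
# value differs or is missing, so "works on my machine" gaps surface early.

def config_diff(local: dict, target: dict) -> dict:
    """Return {key: (local_value, target_value)} for mismatched settings."""
    keys = set(local) | set(target)
    return {
        k: (local.get(k, "<missing>"), target.get(k, "<missing>"))
        for k in keys
        if local.get(k) != target.get(k)
    }


if __name__ == "__main__":
    dev = {"runtime": "node20", "feature_flags": "on", "db": "sqlite"}
    test_env = {"runtime": "node18", "feature_flags": "on", "db": "postgres"}
    for key, (mine, theirs) in sorted(config_diff(dev, test_env).items()):
        print(f"{key}: dev={mine!r} test={theirs!r}")
```

Run as a CI step, a check like this turns silent environment drift into a visible, reviewable failure instead of a late-night debugging session.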
The Werewolf Developer: “Developers Can’t Test” 🐺
Trick:
This myth suggests developers “can’t” or “shouldn’t” test, which prevents a collaborative, shared responsibility for quality. It may stem from the safety net testers provide when they take full responsibility for testing every feature. Then, when testers are spread thin, they often lack the bandwidth to address every potential issue, making it critical for developers to also maintain quality.
Treat:
Emily emphasises the importance of collaborative testing activities. Because developers are intimately familiar with the features they have implemented, they can be taught to write negative tests for “else” conditions or to spot integration failures.
Implementing test-driven development (TDD) also brings quality considerations into the coding process, allowing developers to address edge cases alongside testers. Pair testing sessions between developers and testers can speed up the development process, removing the need to rely on a limited number of testers, and acknowledging developers who find bugs helps establish a culture of shared responsibility. Finally, closer working relationships often give all parties a greater appreciation for each other’s role.
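As a sketch of the TDD flow described above, the negative and else-branch tests below would be written first, then the (invented) ticket-banding function implemented to make them pass; the rules themselves are hypothetical.

```python
# Hypothetical feature: age bands for haunted-house tickets.
# TDD-style, the tests target the cases developers often skip:
# invalid input and the final "else" branch.

def categorise_age(age: int) -> str:
    """Return a ticket band for an age (illustrative rules)."""
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 13:
        return "child"
    elif age < 65:
        return "adult"
    else:
        return "senior"


def test_negative_age_is_rejected():
    try:
        categorise_age(-1)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for a negative age")


def test_else_branch_returns_senior():
    # Boundary value: 65 falls through both earlier conditions.
    assert categorise_age(65) == "senior"


if __name__ == "__main__":
    test_negative_age_is_rejected()
    test_else_branch_returns_senior()
    print("edge-case checks passed")
```

Writing the failing test first makes the edge case part of the feature’s definition rather than something a tester has to discover later.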
The Time Vampire: “We Don’t Have Time for Testing” 🦇
Trick:
Skipping tests to meet deadlines often means seemingly minor issues transform into major defects post-release, risking system stability and customer satisfaction. This school of thought is enough to drain the life force from any tester.
Treat:
Fending off the phrase “we don’t have time for testing” is a challenging task, which could be addressed through the following best practices:
- Equip as many engineers as possible with rosaries, showcasing the benefits of testing throughout the software development lifecycle.
- Set-up mirrors in every feature, reflecting back questions like “is this fast enough?” and “is this secure enough?” to encourage discussions on non-functional quality requirements. Questioning the system speed under load and introducing security requirements will ensure time is given to testing, by showcasing issues in a different light.
- Sprinkle mustard seeds into the development process by integrating testing into the routine project workflow, rather than requesting a dedicated period of time for regression testing. This might extend to a project’s definition of done, by including key checks on critical or high-risk scenarios.
- Introduce automated tests (garlic) to accelerate repetitive checks (and deter the vampires).
The Quality Crypt: “There’s No Automated Tests” ⛪
Trick:
Inheriting a system with no automated tests is like wandering a crypt without a flame. Manually reviewing and testing every feature and bug becomes a time sink, relying solely on human memory and consistency, which is prone to error. Without a safety net to prevent regression bugs, the pressure to check everything within release deadlines mounts until defects slip through.
Treat:
Adopt an automation-first approach, writing tests for high-impact / high-risk areas as you make updates. For legacy systems, start with unit and API tests in sections undergoing frequent changes, then build test coverage gradually. By prioritising automation for high-risk or frequently used areas, the team can reduce manual testing and prevent bugs from haunting production.
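One common way to start on a legacy system is a characterisation test: pin the current behaviour of frequently changed code before touching it, so refactoring has a safety net from day one. The formatting function below is an invented stand-in for inherited, untested logic.

```python
# A characterisation test locks in the behaviour legacy code exhibits
# today, so any change in output surfaces immediately during refactoring.

def legacy_price_label(pence: int) -> str:
    """Inherited, previously untested formatting logic (illustrative)."""
    pounds = pence // 100
    remainder = pence % 100
    return f"£{pounds}.{remainder:02d}"


def test_characterise_current_behaviour():
    # Outputs observed from the running system, recorded as assertions.
    assert legacy_price_label(0) == "£0.00"
    assert legacy_price_label(100) == "£1.00"
    assert legacy_price_label(1999) == "£19.99"


if __name__ == "__main__":
    test_characterise_current_behaviour()
    print("legacy behaviour pinned")
```

Starting with tests like this in the areas that change most often builds coverage exactly where regressions are most likely to haunt production.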
Conclusion
These tricks and treats offer a few protection charms to avoid common development pitfalls and can help foster a collaborative, resilient team. Quality, communication, and continuous improvement are the best defences against development team horror stories.