It's been too long, and I don't want to bore you with stories about diapers and work, even though they are plentiful. Well, a little something about work, then. This is a testing blog and I'm a tester after all.
Two things ignited my resolve to write something for your pleasure, dear reader. Firstly, Eeva Pursula inspired me with her blog. Secondly, I read Simon Knight's LinkedIn article about challenging interview questions for people in testing and QA, and felt like saying something more than would fit into a comment field. Don't get me wrong: there was nothing wrong with Simon's article, but one of the replies caught my attention...
"Nice list. You could also add. Do you find more problems with test cases or exploratory testing? Why? From my experience test design covers the requirements but exploratory testing at the edge of requirements finds more interesting issues. I look at test design as teaching the tester the application. When that is mastered ideas for exploratory testing are discovered. Since requirements can never be complete test cases also have limitations based on the knowledge of the tester during design."
This is totally OK, but make sure you know who sits across the table. If it were me, you should be prepared for these counterquestions:
- Why would you count the problems? How about the value and meaningfulness of the findings that may or may not be problems?
- What assumptions are the requirements built on? Do they involve only someone's explicit needs, or could there be more? Needs that cannot be articulated? Or perhaps wants that this someone is unaware of, but that matter nonetheless? Who is this someone? Does (s)he matter?
- What is "an edge of requirement"?
- What are interesting issues? Problems or otherwise valuable findings? Perhaps something that maps the true potential of the test object?
- If the test design teaches the tester the application, on what does it focus? On functionalities? Usability? Security? Performance? Some other quality dimensions? How would you evaluate learnability of the application if the test design "teaches" you that? Are you sure the test design teaches the right things?
- Are you sure test design gives you ideas for exploration or vice versa?
- Do you need requirements to start testing? Are you sure? Could you have any other means to interpret and ultimately judge the results testing gives you? Are you sure?
My current role at TeliaSonera requires that I interview people who do testing. Mainly consultants who offer their services to us, but also developers and others who participate in the qualitative evaluation of our company's products and services. I also meet a lot of people who know that I'm quite invested in exploratory testing. They try to dazzle me with slides about the subject and cumbersome rants on how exploration should be considered some complementary activity to what I like to call assumption-driven testing. They only make things worse by doing that. Just saying.
Make no mistake: Exploration is the key to all testing. Testing is there to find valuable information for people who make decisions. And you cannot find if you don't explore. Designing test sets, building automation, writing requirements, whatnot. They all require exploration.
All I'm asking is that you think before asking the applicant questions such as "Do you find more problems with test cases or exploratory testing? Why?" The interview process tests not only the applicant but the interviewer as well. You should think your questions through, because they portray your competence in the eyes of a potential talent, or better yet, someone who will relay their impressions of your company to others. The interview can be the first actual point of contact with your company and its tangible reputation. So play it well on both sides of the table.
Do you agree? Disagree? Please comment below.
PS: Ah, it's good to be back. :)
PPS: As I finished writing this, I googled "So you think you can test" and found Huib Schoots' blog post about the subject. Darn it, I'm not going to delete mine. :)