Efficient testing strategy/methodology

I'm currently at a company that has 3 developers per platform for our product (iOS, Android, and a proprietary hardware device). I am on the iOS team, and this is my first time at a company of this size working with a team of developers. Over the past few months I have noticed a tremendous lack of efficiency when it comes to QA & testing, and I'm curious to hear other people's take on this. (Our QA team at this time consists of the manager and one QA tester.)

Let's say a bug is found during testing, and it's a pretty minor one that requires minimal code change that is isolated to one part of the app. Our QA manager immediately triggers a full set of regression testing that typically takes at least 3 days. For me, as a developer, it's beyond frustrating to see an isolated piece of code that affects only one screen of the app trigger tests that include areas of the app that are not affected at all. This can't be normal, can it? Has anyone else encountered this kind of testing process?

Don't get me wrong, I understand that QA needs to maintain a sort of separation from developers and can't just rely on us assuring them that "it'll be fine, trust me". But wouldn't evaluating the code change and the areas affected by it be a prudent thing to do in order to be more efficient with testing? Often I can just comment out the code that was changed, then keep commenting out whatever no longer compiles until the app builds again; at that point, everything I had to comment out is an area affected by the change and needs to be tested. It would then follow that this is a solvable problem, so I wonder if there are any tools out there that would do this?

Ultimately, this testing "strategy" slows down development greatly and results in a worse product because we as developers can't get in as many bug fixes as we would want, especially when getting close to a release. It's a tricky thing because I don't want to come off as telling the QA manager how to do her job, but I just feel there has to be a better way.

Hello Flyingsand,

It has happened to me several times that a change I made was, as far as I could tell with my knowledge of the project, only affecting one part of the software, when it actually introduced a regression in a part of the project I didn't know about. But that's probably my fault.

Another thing is that QA tester time is usually cheaper than developer time, so taking the time to tell QA what needs to be tested for each bugfix would probably end up more expensive than just running the full set of tests.

The only thing that seems weird to me in what you say is that this process slows down development; what prevents you from working while a test pass is under way? The next test pass will cover the new batch of bug fixes, so I don't see how this affects development at all. I have never been in a position where I had to stop working while testing was being done, except maybe very close to release when we're trying to fix one last thing.
One option is to start optimizing the tests so that they don't take 3 days. (or throw more hardware at the problem)

Another way to speed up the iteration cycle is to batch up several (minor) changes and test them together in a single test run; if the run fails, you re-run just the failed tests against each individual change to isolate where the fault happened. Testing each individual change against the full suite is a waste of time.
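That batch-then-bisect loop can be sketched in a few lines (a toy Python sketch, assuming exactly one bad change in the batch; `passes` is a made-up callback standing in for whatever actually builds a prefix of the batch and runs the expensive suite against it):

```python
def find_culprit(changes, passes):
    """Isolate the one bad change in a batch via binary search.

    `changes` is the ordered batch; `passes(subset)` builds with only that
    prefix of changes applied and returns True if the full suite passes.
    Assumes exactly one culprit, so a prefix fails iff it contains it.
    """
    lo, hi = 0, len(changes)   # culprit index is somewhere in [lo, hi)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if passes(changes[:mid]):
            lo = mid           # first `mid` changes are clean
        else:
            hi = mid           # culprit is among the first `mid`
    return changes[lo]
```

With one culprit among n batched changes, that's roughly log2(n) expensive suite runs instead of n, which is the same trade-off `git bisect` makes over commits.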

Integration tests are important, and you cannot rely on a human to know what will be affected by each change.
Guntha

The only thing that seems weird to me in what you say is that this process slows down development; what prevents you from working while a test pass is under way? The next test pass will cover the new batch of bug fixes, so I don't see how this affects development at all. I have never been in a position where I had to stop working while testing was being done, except maybe very close to release when we're trying to fix one last thing.

What I meant is that it slows down development in general, not necessarily any individual developer's time. In other words, we end up with a bigger backlog of bugs since we can't get in minor fixes after a certain point. So maybe "slows down development" wasn't the right wording. Maybe.. "slows down progress" is better?

ratchetfreak

One option is to start optimizing the tests so that they don't take 3 days. (or throw more hardware at the problem)

Another way to speed up the iteration cycle is to batch up several (minor) changes and test them together in a single test run; if the run fails, you re-run just the failed tests against each individual change to isolate where the fault happened. Testing each individual change against the full suite is a waste of time.

Integration tests are important, and you cannot rely on a human to know what will be affected by each change.

Just to give a little more context, we're at the end of a testing cycle right now and management wants to release soon. A bug that had been overlooked was discovered during this testing cycle, and I came up with a clever solution for it that requires very minimal code and is isolated to one small part of the app. However, the fix is not being approved, since it will trigger a round of regression testing that will take at least 3 days. I already coded the fix (in a separate branch, of course), and I'm just trying to find a way to convince QA that it just requires a quick test of this part of the app, which should take.. a couple of hours max. Not 3 days!

Is this the way QA really works? There's no consideration of the scope of the code change? Just.. "nope, have to re-test all this stuff"?

I will try and see if adding unit/integration tests for this fix will convince QA, but since it's part of a new feature, it might not work.
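If it helps the discussion, the kind of narrow test I'd add looks roughly like this. (A hypothetical Python example rather than our actual Swift/XCTest code; `format_duration` is a made-up stand-in for the small piece of code the fix touched.)

```python
import unittest

def format_duration(seconds):
    """The (hypothetical) helper the fix touched: seconds -> "m:ss"."""
    minutes, secs = divmod(int(seconds), 60)
    return f"{minutes}:{secs:02d}"

class FormatDurationTest(unittest.TestCase):
    """Pins down the fixed behavior so it can be re-verified in seconds."""

    def test_pads_seconds_and_rounds_down(self):
        self.assertEqual(format_duration(65.9), "1:05")

    def test_zero_is_formatted(self):
        self.assertEqual(format_duration(0), "0:00")

# run with: python -m unittest <this file>
```

The point isn't this particular helper; it's that a focused test is a cheap, repeatable artifact QA could vet once and then trust on later fixes in the same area.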
Everything in this thread seems utterly insane to me. Why do your regression tests take 3 entire days to run!? In what universe is developer time so much more expensive than QA time that a few minutes of developer time could outweigh 3 days of QA time!?!? The way you describe the process, it sounds like QA is *preventing* you from fixing bugs in a timely manner, which is just 100% backwards. If what you say is accurate, it seems like multiple things must have gone very badly wrong already in the organization of your company's development process for things to get to this point.
notnullnotvoid
Everything in this thread seems utterly insane to me. Why do your regression tests take 3 entire days to run!? In what universe is developer time so much more expensive than QA time that a few minutes of developer time could outweigh 3 days of QA time!?!? The way you describe the process, it sounds like QA is *preventing* you from fixing bugs in a timely manner, which is just 100% backwards. If what you say is accurate, it seems like multiple things must have gone very badly wrong already in the organization of your company's development process for things to get to this point.


It is, isn't it? Glad I'm not alone in thinking so. :)

The regression tests take so long to get through because there are just under 200 individual test cases that need to be manually tested on each of the testing devices (1 iPhone, 1 iPad, 1 simulator). We've written a pretty decent set of unit & integration tests, but our QA manager hasn't vetted them so she won't accept them as validation.

The company I'm at is relatively young, so there isn't a history of bad development or anything, but our QA manager is a pretty recent hire (about 6 months ago) and we have never had a QA manager before (only a QA team that reported to the Product Director). I suspect the QA manager is carrying baggage from bad experiences, and thinks that any code added or modified is going to cause ripple effects and unearth more bugs than it fixes.. or something..

And it's not like we don't do code reviews either. Everything we do gets seen by at least 2 other developers before it can be merged into the dev branch. It's just strange to me because I've been programming for quite a while, but I've mostly been at smaller agencies or done freelance/contract work. This is my first time at a larger company, and I'm just baffled at the QA process. It wasn't even like this when I started, just since the QA manager got hired and started "formalizing" everything.

Edited by Flyingsand on
Flyingsand

The regression tests take so long to get through because there are just under 200 individual test cases that need to be manually tested on each of the testing devices (1 iPhone, 1 iPad, 1 simulator).


So you are testing all changes on all platforms every time? Wouldn't it be better to define a lead platform to do fast testing and fixing on? Once all tests are successful on the lead platform, the other platforms are tested.

Edited by LaresYamoir on
Flyingsand
notnullnotvoid
Everything in this thread seems utterly insane to me. Why do your regression tests take 3 entire days to run!? In what universe is developer time so much more expensive than QA time that a few minutes of developer time could outweigh 3 days of QA time!?!? The way you describe the process, it sounds like QA is *preventing* you from fixing bugs in a timely manner, which is just 100% backwards. If what you say is accurate, it seems like multiple things must have gone very badly wrong already in the organization of your company's development process for things to get to this point.


It is, isn't it? Glad I'm not alone in thinking so. :)

The regression tests take so long to get through because there are just under 200 individual test cases that need to be manually tested on each of the testing devices (1 iPhone, 1 iPad, 1 simulator). We've written a pretty decent set of unit & integration tests, but our QA manager hasn't vetted them so she won't accept them as validation.

The company I'm at is relatively young, so there isn't a history of bad development or anything, but our QA manager is a pretty recent hire (about 6 months ago) and we have never had a QA manager before (only a QA team that reported to the Product Director). I suspect the QA manager is carrying baggage from bad experiences, and thinks that any code added or modified is going to cause ripple effects and unearth more bugs than it fixes.. or something..

And it's not like we don't do code reviews either. Everything we do gets seen by at least 2 other developers before it can be merged into the dev branch. It's just strange to me because I've been programming for quite a while, but I've mostly been at smaller agencies or done freelance/contract work. This is my first time at a larger company, and I'm just baffled at the QA process. It wasn't even like this when I started, just since the QA manager got hired and started "formalizing" everything.


Like you said, why would they take your word for it that it's just this part of the app that is affected by your "clever" fix?
Many times that has been said, and many millions of dollars have been lost.
Just let QA do their job and you do yours.
If you want to take responsibility and give up your whole salary if something goes wrong, then sure, we will take your word for it.
godratio

Like you said, why would they take your word for it that it's just this part of the app that is affected by your "clever" fix?
Many times that has been said, and many millions of dollars have been lost.
Just let QA do their job and you do yours.
If you want to take responsibility and give up your whole salary if something goes wrong, then sure, we will take your word for it.


In my first post on this topic I explicitly say that I understand (and don't want them to just "trust me"):

Don't get me wrong, I understand that QA needs to maintain a sort of separation from developers and can't just rely on us assuring them that "it'll be fine, trust me".


I'm trying to find a way to show QA, by way of proof, that a code change only affects a single small part of the app. Ideally this could be achieved through some sort of code review process where several developers (including at least one senior developer) can back that up. It would be cool as well if there were a tool where you could say, "This function changed, now show me all the areas affected by this change": all the code that depends on that function, either by calling it, being called by it, or touching any global or shared state it modifies.

Furthermore, let me be clear that I highly respect and value QA and their role in software development. It's just that I'm seeing massive inefficiencies in the process since our QA manager started, and I'm trying to find a way to remedy this. I mean, can you imagine if this is the way it's done in games? No game would ever get done.
...Actually, that's the way it was done on every game I professionally worked on. Except for the progress slow-down part, which I still don't understand.
Flyingsand
if there was a tool where you could say, "This function changed, now show me all the areas affected by this change". i.e. all the code that depends on that function either by calling it or it calling other functions, or modifying any global or shared state.


I'm pretty sure such a tool does not exist and will not exist. There are so many different hidden ways to modify global/shared state, directly or indirectly through the OS, that either you'll be fighting countless false positives (one "malloc" or similar call "modifies" every other place in the program due to a global state change), or it won't find almost anything at all beyond direct compilation errors, which are pretty useless for integration testing.

Edited by Mārtiņš Možeiko on
Guntha
...Actually, that's the way it was done on every game I professionally worked on. Except for the progress slow-down part, which I still don't understand.


So if there was, say, a minor bug in the sorting of items in a player's inventory list, that change alone would require a regression test that included the physics system, input/player control, animations, AI, rendering, audio, networking, etc.? That would be the analogy to what I'm describing.

mmozeiko
I'm pretty sure such a tool does not exist and will not exist. There are so many different hidden ways to modify global/shared state, directly or indirectly through the OS, that either you'll be fighting countless false positives (one "malloc" or similar call "modifies" every other place in the program due to a global state change), or it won't find almost anything at all beyond direct compilation errors, which are pretty useless for integration testing.


Well, yeah, if you bring in the global state of the OS, I can see that being problematic. I think for some cases it's a solvable problem, but in the general case you're probably right -- it couldn't be done.