Software Testing In A Virtualized World
Google's James Whittaker on how cloud computing and virtualization change the approach to testing.
James Whittaker is director of test engineering at Google and author of How to Break Software: A Practical Guide to Testing. He recently spoke with Dr. Dobb's editor-in-chief Jonathan Erickson.
Dr. Dobb's: Do virtualization and cloud computing pose unique testing challenges?
Whittaker: Opportunities more than challenges. At Google, if I want to test, say, Chrome, I visit a Web site, tell it how many machines I want and what operating systems, drivers, apps, and the version of Chrome that I want on them, and wham! those machines are provisioned, and I can point my test automation at them. I don't care where they are. I don't care what they are. They exist, and they act just like the test environment that I would otherwise have to painstakingly--and expensively--create.
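To make that workflow concrete, a provisioning call along the lines Whittaker describes might look like the Python sketch below. The service URL, payload fields, and response shape are hypothetical stand-ins, not Google's actual tooling.

    import requests

    def provision_test_machines(count, os_name, chrome_version):
        """Request ready-made test machines from a (hypothetical) lab service."""
        response = requests.post(
            "https://testlab.example.com/api/provision",  # placeholder URL
            json={
                "machines": count,
                "os": os_name,
                "apps": ["chrome"],
                "chrome_version": chrome_version,
            },
            timeout=60,
        )
        response.raise_for_status()
        # The service answers with host addresses to point test automation at.
        return response.json()["hosts"]

    for host in provision_test_machines(10, "windows-10", "4.0"):
        print("running suite against", host)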
Dr. Dobb's: What about multicore platforms and parallel programming?
Whittaker: From an external, black-box point of view, multicore behaves the same as single core, and parallel the same as serial. The difference shows up in unit-level and other code-based tests. The devil is in these low-level details, and the tools haven't yet caught up.
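One example of those low-level details: a data race that is invisible from the outside, because any single call behaves correctly, but that a code-level unit test can flush out by forcing contention. The counter below is an illustrative sketch, not code from the interview.

    import threading

    class Counter:
        def __init__(self):
            self.value = 0

        def increment(self):
            # Unsynchronized read-modify-write: a data race under threads.
            current = self.value
            self.value = current + 1

    def test_counter_under_contention():
        counter = Counter()
        workers = [
            threading.Thread(
                target=lambda: [counter.increment() for _ in range(10000)]
            )
            for _ in range(8)
        ]
        for t in workers:
            t.start()
        for t in workers:
            t.join()
        # Often fails: interleaved threads lose updates, so the total
        # comes up short of the expected 80000.
        assert counter.value == 8 * 10000, counter.value

    test_counter_under_contention()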
Dr. Dobb's: Functional testing, unit testing, security testing, and more. What's next?
Whittaker: Accessibility--hands down. The idea that we can abstract the input mechanics from the functionality of the app. The idea that anything the application is capable of doing can be invoked programmatically. For users with disabilities, this is crucial as it allows for a great deal of creativity in how the program is manipulated. For testers, this means the ultimate set of test hooks. With accessible code, I can write hooks that can literally drive it through its entire set of capabilities. Nothing needs to be left to chance anymore.
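As one illustration of accessibility doubling as a test hook, here is a minimal Python sketch using pywinauto, which drives Windows applications through the UI Automation accessibility layer. Control names and window titles vary by application and Windows version.

    from pywinauto import Application

    # Launch Notepad and attach through the UI Automation (accessibility) API.
    app = Application(backend="uia").start("notepad.exe")
    window = app.window(title_re=".*Notepad")
    window.wait("ready")

    # No pixel coordinates or recorded mouse paths: the app is driven
    # through the same programmatic surface assistive technology uses.
    window.type_keys("accessibility doubles as a test hook", with_spaces=True)

    app.kill()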
Dr. Dobb's: How close are we to "real" automated testing?
Whittaker: Whatever buttons users can press, whatever values they can enter, we can see and do with automation. But applying inputs is only the first half of manual testing. Human testers notice subtle variations that lead them to say "that's a bug," and this is the primary limitation of automation: programs aren't good at observing output and judging behavior. They can detect crashes, but they can't notice major bugs like navigating to the wrong page or rendering an image incorrectly.