Date: Monday, 14 Apr 2014 15:25
by Erik Kuefler

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

After writing a method, it's easy to write just one test that verifies everything the method does. But it can be harmful to think that tests and public methods should have a 1:1 relationship. What we really want to test are behaviors, where a single method can exhibit many behaviors, and a single behavior sometimes spans across multiple methods.

Let's take a look at a bad test that verifies an entire method:

@Test public void testProcessTransaction() {
  User user = newUserWithBalance(LOW_BALANCE_THRESHOLD.plus(dollars(2)));
  transactionProcessor.processTransaction(
      user,
      new Transaction("Pile of Beanie Babies", dollars(3)));
  assertContains("You bought a Pile of Beanie Babies", ui.getText());
  assertEquals(1, user.getEmails().size());
  assertEquals("Your balance is low", user.getEmails().get(0).getSubject());
}

Displaying the name of the purchased item and sending an email about the balance being low are two separate behaviors, but this test looks at both of those behaviors together just because they happen to be triggered by the same method. Tests like this very often become massive and difficult to maintain over time as additional behaviors keep getting added in—eventually it will be very hard to tell which parts of the input are responsible for which assertions. The fact that the test's name is a direct mirror of the method's name is a bad sign.

It's a much better idea to use separate tests to verify separate behaviors:

@Test public void testProcessTransaction_displaysNotification() {
  transactionProcessor.processTransaction(
      new User(), new Transaction("Pile of Beanie Babies"));
  assertContains("You bought a Pile of Beanie Babies", ui.getText());
}

@Test public void testProcessTransaction_sendsEmailWhenBalanceIsLow() {
  User user = newUserWithBalance(LOW_BALANCE_THRESHOLD.plus(dollars(2)));
  transactionProcessor.processTransaction(
      user,
      new Transaction(dollars(3)));
  assertEquals(1, user.getEmails().size());
  assertEquals("Your balance is low", user.getEmails().get(0).getSubject());
}

Now, when someone adds a new behavior, they will write a new test for that behavior. Each test will remain focused and easy to understand, no matter how many behaviors are added. This will make your tests more resilient since adding new behaviors is unlikely to break the existing tests, and clearer since each test contains code to exercise only one behavior.

Date: Wednesday, 02 Apr 2014 09:49

Update: As noticed in the comments, the date of the post was not a mere coincidence :)


by Kaue Silveira

Here at Google, we invest heavily in development productivity research. In fact, our TDD research group now occupies nearly an entire building of the Googleplex. The group has been working hard to minimize the development cycle time, and we’d like to share some of the amazing progress they’ve made.

The Concept

In the ways of old, it used to be that people wrote tests for their existing code. This was changed by TDD (Test-driven Development), where one would write the test first and then write the code to satisfy it. The TDD research group didn’t think this was enough and wanted to elevate the humble test to the next level. We are pleased to announce the Real TDD, our latest innovation in the Program Synthesis field, where you write only the tests and have the computer write the code for you!

The following graph shows how the number of tests created by a small feature team grew since they started using this tool towards the end of 2013. Over the last 2 quarters, more than 89% of this team’s production code was written by the tool!

See it in action:

Test written by a Software Engineer:

class LinkGeneratorTest(googletest.TestCase):

  def setUp(self):
    self.generator = link_generator.LinkGenerator()

  def testGetLinkFromIDs(self):
    expected = ('https://frontend.google.com/advancedSearchResults?'
                's.op=ALL&s.r0.field=ID&s.r0.val=1288585+1310696+1346270+')
    actual = self.generator.GetLinkFromIDs(set((1346270, 1310696, 1288585)))
    self.assertEqual(expected, actual)

Code created by our tool:

import urllib

class LinkGenerator(object):

  _URL = (
      'https://frontend.google.com/advancedSearchResults?'
      's.op=ALL&s.r0.field=ID&s.r0.val=')

  def GetLinkFromIDs(self, ids):
    result = []
    for id in sorted(ids):
      result.append('%s ' % id)
    return self._URL + urllib.quote_plus(''.join(result))

Note that the tool is smart enough to not generate the obvious implementation of returning a constant string, but instead it correctly abstracts and generalizes the relation between inputs and outputs. It becomes smarter at every use and it’s behaving more and more like a human programmer every day. We once saw a comment in the generated code that said "I need some coffee".

How does it work?

We’ve trained the Google Brain with billions of lines of open-source software to learn about coding patterns and how product code correlates with test code. Its accuracy is further improved by using Type Inference to infer types from code and the Girard-Reynolds Isomorphism to infer code from types.

The tool runs every time your unit test is saved, and it uses the learned model to guide a backtracking search for a code snippet that satisfies all assertions in the test. It provides sub-second responses for 99.5% of the cases (as shown in the following graph), thanks to millions of pre-computed assertion-snippet pairs stored in Spanner for global low-latency access.



How can I use it?

We will offer a free (rate-limited) service that everyone can use, once we have sorted out the legal issues regarding the possibility of mixing code snippets originating from open-source projects with different licenses (e.g., GPL-licensed tests will simply refuse to pass BSD-licensed code snippets). If you would like to try our alpha release before the public launch, leave us a comment!

Date: Thursday, 27 Mar 2014 14:41
by Anthony Vallone

We have two excellent, new videos to share about testing at Google. If you are curious about the work that our Test Engineers (TEs) and Software Engineers in Test (SETs) do, you’ll find both of these videos very interesting.

The Life at Google team produced a video series called Do Cool Things That Matter. This series includes a video of an SET and a TE from the Google Maps team (Sean Jordan and Yvette Nameth) discussing their work.

Meet Yvette and Sean from the Google Maps Test Team



The Google Students team hosted a Hangouts On Air event with several Google SETs (Diego Salas, Karin Lundberg, Jonathan Velasquez, Chaitali Narla, and Dave Chen) discussing the SET role.

Software Engineers in Test at Google - Covering your (Code)Bases



Interested in joining the ranks of TEs or SETs at Google? Search for Google test jobs.

Date: Thursday, 27 Mar 2014 14:41
by Anthony Vallone

How long does it take to find the root cause of a failure in your system? Five minutes? Five days? If you answered close to five minutes, it’s very likely that your production system and tests have great logging. All too often, seemingly unessential features like logging, exception handling, and (dare I say it) testing are an implementation afterthought. Like exception handling and testing, you really need to have a strategy for logging in both your systems and your tests. Never underestimate the power of logging. With optimal logging, you can even eliminate the necessity for debuggers. Below are some guidelines that have been useful to me over the years.


Channeling Goldilocks

Never log too much. Massive, disk-quota-burning logs are a clear indicator that little thought was put into logging. If you log too much, you’ll need to devise complex approaches to minimize disk access, maintain log history, archive large quantities of data, and query these large sets of data. More importantly, you’ll make it very difficult to find valuable information in all the chatter.

The only thing worse than logging too much is logging too little. There are normally two main goals of logging: help with bug investigation and event confirmation. If your log can’t explain the cause of a bug or whether a certain transaction took place, you are logging too little.

Good things to log:
  • Important startup configuration
  • Errors
  • Warnings
  • Changes to persistent data
  • Requests and responses between major system components
  • Significant state changes
  • User interactions
  • Calls with a known risk of failure
  • Waits on conditions that could take measurable time to satisfy
  • Periodic progress during long-running tasks
  • Significant branch points of logic and conditions that led to the branch
  • Summaries of processing steps or events from high level functions - Avoid logging every step of a complex process in low-level functions.

Bad things to log:
  • Function entry - Don’t log a function entry unless it is significant or logged at the debug level.
  • Data within a loop - Avoid logging from many iterations of a loop. It is OK to log from iterations of small loops or to log periodically from large loops.
  • Content of large messages or files - Truncate or summarize the data in some way that will be useful to debugging.
  • Benign errors - Errors that are not really errors can confuse the log reader. This sometimes happens when exception handling is part of successful execution flow.
  • Repetitive errors - Do not repetitively log the same or similar error. This can quickly fill a log and hide the actual cause. Frequency of error types is best handled by monitoring. Logs only need to capture detail for some of those errors.


There is More Than One Level

Don't log everything at the same log level. Most logging libraries offer several log levels, and you can enable certain levels at system startup. This provides a convenient control for log verbosity.

The classic levels are:
  • Debug - verbose and only useful while developing and/or debugging.
  • Info - the most popular level.
  • Warning - strange or unexpected states that are acceptable.
  • Error - something went wrong, but the process can recover.
  • Critical - the process cannot recover, and it will shut down or restart.

Practically speaking, only two log configurations are needed:
  • Production - Every level is enabled except debug. If something goes wrong in production, the logs should reveal the cause.
  • Development & Debug - While developing new code or trying to reproduce a production issue, enable all levels.
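
As a rough illustration of flipping between these two configurations, here is a minimal sketch using java.util.logging; the single configure() helper and its debugMode flag are just one way to wire this up:

import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.Logger;

public class LogConfig {
  /** Call once at startup: debugMode=false in production, true while developing or debugging. */
  public static void configure(boolean debugMode) {
    Level level = debugMode ? Level.ALL : Level.INFO;  // INFO and above in production
    Logger root = Logger.getLogger("");                // the root logger
    root.setLevel(level);
    for (Handler handler : root.getHandlers()) {
      handler.setLevel(level);                         // handlers filter records too
    }
  }
}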


Test Logs Are Important Too

Log quality is equally important in test and production code. When a test fails, the log should clearly show whether the failure was a problem with the test or production system. If it doesn't, then test logging is broken.

Test logs should always contain:
  • Test execution environment
  • Initial state
  • Setup steps
  • Test case steps
  • Interactions with the system
  • Expected results
  • Actual results
  • Teardown steps


Conditional Verbosity With Temporary Log Queues

When errors occur, the log should contain a lot of detail. Unfortunately, detail that led to an error is often unavailable once the error is encountered. Also, if you’ve followed advice about not logging too much, your log records prior to the error record may not provide adequate detail. A good way to solve this problem is to create temporary, in-memory log queues. Throughout processing of a transaction, append verbose details about each step to the queue. If the transaction completes successfully, discard the queue and log a summary. If an error is encountered, log the content of the entire queue and the error. This technique is especially useful for test logging of system interactions.
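
A minimal sketch of such a queue follows; the class and method names are illustrative, and java.util.logging stands in for whatever logging library you use:

import java.util.ArrayList;
import java.util.List;
import java.util.logging.Logger;

/** Buffers verbose detail and only emits it if the transaction fails. */
public class TransactionLog {
  private static final Logger logger = Logger.getLogger(TransactionLog.class.getName());
  private final List<String> queue = new ArrayList<>();

  public void addDetail(String message) {
    queue.add(message);  // cheap in-memory append; nothing hits the log yet
  }

  public void onSuccess(String summary) {
    queue.clear();       // discard the detail
    logger.info(summary);
  }

  public void onError(String error) {
    for (String detail : queue) {
      logger.warning(detail);  // dump everything that led up to the error
    }
    logger.severe(error);
    queue.clear();
  }
}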


Failures and Flakiness Are Opportunities

When production problems occur, you’ll obviously be focused on finding and correcting the problem, but you should also think about the logs. If you have a hard time determining the cause of an error, it's a great opportunity to improve your logging. Before fixing the problem, fix your logging so that the logs clearly show the cause. If this problem ever happens again, it’ll be much easier to identify.

If you cannot reproduce the problem, or you have a flaky test, enhance the logs so that the problem can be tracked down when it happens again.

Use failures to improve your logging throughout the development process. While writing new code, try to refrain from using debuggers and only use the logs. Do the logs describe what is going on? If not, the logging is insufficient.


Might As Well Log Performance Data

Logged timing data can help debug performance issues. For example, it can be very difficult to determine the cause of a timeout in a large system, unless you can trace the time spent on every significant processing step. This can be easily accomplished by logging the start and finish times of calls that can take measurable time:
  • Significant system calls
  • Network requests
  • CPU intensive operations
  • Connected device interactions
  • Transactions
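
A minimal sketch of wrapping such calls with start/finish timing; the helper below is illustrative, not a particular library's API:

import java.util.concurrent.Callable;
import java.util.logging.Logger;

public class Timed {
  private static final Logger logger = Logger.getLogger(Timed.class.getName());

  /** Runs the operation, logging its start, finish, and elapsed time. */
  public static <T> T call(String name, Callable<T> operation) throws Exception {
    long startMillis = System.currentTimeMillis();
    logger.info(name + " started");
    try {
      return operation.call();
    } finally {
      long elapsedMillis = System.currentTimeMillis() - startMillis;
      logger.info(name + " finished in " + elapsedMillis + " ms");
    }
  }
}

For example, a network request can be wrapped as Timed.call("fetchProfile", ...) with the request made inside the Callable body, so every significant step leaves a timing trail in the log.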


Following the Trail Through Many Threads and Processes

You should create unique identifiers for transactions that involve processing across many threads and/or processes. The initiator of the transaction should create the ID, and it should be passed to every component that performs work for the transaction. This ID should be logged by each component when logging information about the transaction. This makes it much easier to trace a specific transaction when many transactions are being processed concurrently.
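
A minimal sketch of the idea; the handler below is hypothetical, and real systems often use a logging context (such as SLF4J's MDC) instead of string concatenation:

import java.util.UUID;
import java.util.logging.Logger;

public class OrderHandler {
  private static final Logger logger = Logger.getLogger(OrderHandler.class.getName());

  /** The initiator creates the ID once, at the edge of the system. */
  public static String newTransactionId() {
    return UUID.randomUUID().toString();
  }

  /** Every component receives the ID and includes it in each log line about the transaction. */
  public void process(String transactionId, String orderData) {
    logger.info("[txn " + transactionId + "] processing order, bytes=" + orderData.length());
    // ... do the work, passing transactionId to downstream components ...
    logger.info("[txn " + transactionId + "] order processed");
  }
}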


Monitoring and Logging Complement Each Other

A production service should have both logging and monitoring. Monitoring provides a real-time statistical summary of the system state. It can alert you if a percentage of certain request types is failing, if the system is experiencing unusual traffic patterns, if performance is degrading, or if other anomalies occur. In some cases, this information alone will clue you in to the cause of a problem. However, in most cases, a monitoring alert is simply a trigger for you to start an investigation. Monitoring shows the symptoms of problems. Logs provide details and state on individual transactions, so you can fully understand the cause of problems.

Date: Thursday, 27 Mar 2014 14:40
by Anthony Vallone

When conducting interviews, I often get questions about our workspace and engineering environment. What IDEs do you use? What programming languages are most common? What kind of tools do you have for testing? What does the workspace look like?

Google is a company that is constantly pushing to improve itself. Just like software development itself, most environment improvements happen via a bottom-up approach. All engineers are responsible for fine-tuning, experimenting with, and improving our process, with a goal of eliminating barriers to creating products that amaze.

Office space and engineering equipment can have a considerable impact on productivity. I’ll focus on these areas of our work environment in this first article of a series on the topic.

Office layout

Google is a highly collaborative workplace, so the open floor plan suits our engineering process. Project teams composed of Software Engineers (SWEs), Software Engineers in Test (SETs), and Test Engineers (TEs) all sit near each other or in large rooms together. The test-focused engineers are involved in every step of the development process, so it’s critical for them to sit with the product developers. This keeps the lines of communication open.

Google Munich

The office space is far from rigid, and teams often rearrange desks to suit their preferences. The facilities team recently finished renovating a new floor in the New York City office, and after a day of engineering debates on optimal arrangements and white board diagrams, the floor was completely transformed.

Besides the main office areas, there are lounge areas to which Googlers go for a change of scenery or a little peace and quiet. If you are trying to avoid becoming a casualty of The Great Foam Dart War, lounges are a great place to hide.

Google Dublin

Working with remote teams

Google’s worldwide headquarters is in Mountain View, CA, but it’s a very global company, and our project teams are often distributed across multiple sites. To help keep teams well connected, most of our conference rooms have video conferencing equipment. We make frequent use of this equipment for team meetings, presentations, and quick chats.

Google Boston

What’s at your desk?

All engineers get high-end machines and have easy access to data center machines for running large tasks. A new member on my team recently mentioned that his Google machine has 16 times the memory of the machine at his previous company.

Most Google code runs on Linux, so the majority of development is done on Linux workstations. However, those who work on client code for Windows, OS X, or mobile develop on the relevant OSes. For displays, each engineer has a choice of either two 24 inch monitors or one 30 inch monitor. We also get our choice of laptop, picking from various models of Chromebook, MacBook, or Linux laptop. These come in handy when going to meetings, lounges, or working remotely.

Google Zurich

Thoughts?

We are interested to hear your thoughts on this topic. Do you prefer an open-office layout, cubicles, or private offices? Should test teams be embedded with development teams, or should they operate separately? Do the benefits of offering engineers high-end equipment outweigh the costs?

(Continue to part 2)

Date: Thursday, 27 Mar 2014 14:40
by Anthony Vallone

This is the second in a series of articles about our work environment. See the first.

There are few things as frustrating as getting hampered in your work by a bug in a product you depend on. What if it’s a product developed by your company? Do you report/fix the issue or just work around it and hope it’ll go away soon? In this article, I’ll cover how and why Google dogfoods its own products.

Dogfooding

Google makes heavy use of its own products. We have a large ecosystem of development/office tools and use them for nearly everything we do. Because we use them on a daily basis, we can dogfood releases company-wide before launching to the public. These dogfood versions often have features unavailable to the public but may be less stable. Instability is exactly what you want in your tools, right? Or, would you rather that frustration be passed on to your company’s customers? Of course not!

Dogfooding is an important part of our test process. Test teams do their best to find problems before dogfooding, but we all know that testing is never perfect. We often get dogfood bug reports for edge and corner cases not initially covered by testing. We also get many comments about overall product quality and usability. This internal feedback has, on many occasions, changed product design.

Not surprisingly, test-focused engineers often have a lot to say during the dogfood phase. I don’t think there is a single public-facing product that I have not reported bugs on. I really appreciate the fact that I can provide feedback on so many products before release.

Interested in helping to test Google products? Many of our products have feedback links built-in. Some also have Beta releases available. For example, you can start using Chrome Beta and help us file bugs.

Office software

Our products are used internally for everything from system design documents, to test plans, to discussions about beer brewing techniques. A company’s choice of office tools can have a big impact on productivity, and it is fortunate for Google that we have such a comprehensive suite. The tools have a consistently simple UI (no manual required), perform very well, encourage collaboration, and auto-save in the cloud. Now that I am used to these tools, I would certainly have a hard time going back to the tools of previous companies I have worked for. I’m sure I would forget to click the save buttons for years to come.

Examples of frequently used tools by engineers:
  • Google Drive Apps (Docs, Sheets, Slides, etc.) are used for design documents, test plans, project data, data analysis, presentations, and more.
  • Gmail and Hangouts are used for email and chat.
  • Google Calendar is used to schedule all meetings, reserve conference rooms, and set up video conferencing using Hangouts.
  • Google Maps is used to map office floors.
  • Google Groups are used for email lists.
  • Google Sites are used to host team pages, engineering docs, and more.
  • Google App Engine hosts many corporate, development, and test apps.
  • Chrome is our primary browser on all platforms.
  • Google+ is used for organizing internal communities on topics such as food or C++, and for socializing.

Thoughts?

We are interested to hear your thoughts on this topic. Do you dogfood your company’s products? Do your office tools help or hinder your productivity? What office software and tools do you find invaluable for your job? Could you use Google Docs/Sheets for large test plans?

(Continue to part 3)
Date: Thursday, 27 Mar 2014 14:40
by Anthony Vallone

This is the third in a series of articles about our work environment. See the first and second.

I will never forget the awe I felt when running my first load test on my first project at Google. At previous companies I’ve worked for, running a substantial load test took quite a bit of resource planning and preparation. At Google, I wrote less than 100 lines of code and was simulating tens of thousands of users after just minutes of prep work. The ease with which I was able to accomplish this is due to the impressive coding, building, and testing tools available at Google. In this article, I will discuss these tools and how they affect our test and development process.

Coding and building

The tools and process for coding and building make it very easy to change production and test code. Even though we are a large company, we have managed to remain nimble. In a matter of minutes or hours, you can edit, test, review, and submit code to head. We have achieved this without sacrificing code quality by heavily investing in tools, testing, and infrastructure, and by prioritizing code reviews.

Most production and test code is in a single, company-wide source control repository (open source projects like Chromium and Android have their own). There is a great deal of code sharing in the codebase, and this provides an incredible suite of code to build on. Most code is also in a single branch, so the majority of development is done at head. All code is also navigable, searchable, and editable from the browser. You’ll find code in numerous languages, but Java, C++, Python, Go, and JavaScript are the most common.

Have a strong preference for your editor? Engineers are free to choose from many IDEs and editors. The most common are Eclipse, Emacs, Vim, and IntelliJ, but many others are used as well. Engineers who are passionate about their preferred editors have built up and shared some truly impressive editor plugins/tooling over the years.

Code reviews for all submissions are enforced via source control tooling. This also applies to test code, as our test code is held to the same standards as production code. The reviews are done via web-based code review tools that even include automatically generated test results. The process is very streamlined and efficient. Engineers can change and submit code in any part of the repository, but it must get reviewed by owners of the code being changed. This is great, because you can easily change code that your team depends on, rather than merely request a change to code you do not own.

The Google build system is used for building most code, and it is designed to work across many languages and platforms. It is remarkably simple to define and build targets. You won’t be needing that old Makefile book.

Running jobs and tests

We have some pretty amazing machine and job management tools at Google. There is a generally available pool of machines in many data centers around the globe. The job management service makes it very easy to start jobs on arbitrary machines in any of these data centers. Failing machines are automatically removed from the pool, so tests rarely fail due to machine issues. With a little effort, you can also set up monitoring and pager alerting for your important jobs.

From any machine you can spin up a massive number of tests and run them in parallel across many machines in the pool, via a single command. Each of these tests is run in a standard, isolated environment, so we rarely run into the “it works on my machine!” issue.

Before code is submitted, presubmit tests can be run that will find all tests that depend transitively on the change and run them. You can also define presubmit rules that run checks on a code change and verify that tests were run before allowing submission.

Once you’ve submitted test code, the build and test system automatically registers the test, and starts building/testing continuously. If the test starts failing, your team will get notification emails. You can also visit a test dashboard for your team and get details about test runs and test data. Monitoring the build/test status is made even easier with our build orbs designed and built by Googlers. These small devices will glow red if the build starts failing. Many teams have had fun customizing these orbs to various shapes, including a statue of liberty with a glowing torch.

Statue of LORBerty

Running larger integration and end-to-end tests takes a little more work, but we have some excellent tools to help with these tests as well: Integration test runners, hermetic environment creation, virtual machine service, web test frameworks, etc.

The impact

So how do these tools actually affect our productivity? For starters, the code is easy to find, edit, review, and submit. Engineers are free to choose tools that make them most productive. Before and after submission, running small tests is trivial, and running large tests is relatively easy. Since tests are easy to create and run, it’s fairly simple to maintain a green build, which most teams do most of the time. This allows us to spend more time on real problems and less on the things that shouldn’t even be problems. It allows us to focus on creating rigorous tests. It dramatically accelerates the development process that can prototype Gmail in a day and code/test/release service features on a daily schedule. And, of course, it lets us focus on the fun stuff.

Thoughts?

We are interested to hear your thoughts on this topic. Google has the resources to build tools like this, but would small or medium size companies benefit from a similar investment in their infrastructure? Did Google create the infrastructure or did the infrastructure create Google?

Date: Thursday, 27 Mar 2014 14:39


by Anthony Vallone

Unreproducible bugs are the bane of my existence. Far too often, I find a bug, report it, and hear back that it’s not a bug because it can’t be reproduced. Of course, the bug is still there, waiting to prey on its next victim. These types of bugs can be very expensive due to increased investigation time and overall lifetime. They can also have a damaging effect on product perception when users reporting these bugs are effectively ignored. We should be doing more to prevent them. In this article, I’ll go over some obvious, and maybe not so obvious, development/testing guidelines that can reduce the likelihood of these bugs from occurring.


Avoid and test for race conditions, deadlocks, timing issues, memory corruption, uninitialized memory access, memory leaks, and resource issues

I am lumping together many bug types in this section, but they are all related somewhat by how we test for them and how disproportionately hard they are to reproduce and debug. The root cause and effect can be separated by milliseconds or hours, and stack traces might be nonexistent or misleading. A system may fail in strange ways when exposed to unusual traffic spikes or insufficient resources. Race conditions and deadlocks may only be discovered during unique traffic patterns or resource configurations. Timing issues may only be noticed when many components are integrated and their performance parameters and failure/retry/timeout delays create a chaotic system. Memory corruption or uninitialized memory access may go unnoticed for a large percentage of calls but become fatal for rare states. Memory leaks may be negligible unless the system is exposed to load for an extended period of time.

Guidelines for development:

  • Simplify your synchronization logic. If it’s too hard to understand, it will be difficult to reproduce and debug complex concurrency problems.
  • Always obtain locks in the same order. This is a tried-and-true guideline to avoid deadlocks, but I still see code that breaks it periodically. Define an order for obtaining multiple locks and never change that order.
  • Don’t optimize by creating many fine-grained locks, unless you have verified that they are needed. Extra locks increase concurrency complexity.
  • Avoid shared memory, unless you truly need it. Shared memory access is very easy to get wrong, and the bugs may be quite difficult to reproduce.

Guidelines for testing:

  • Stress test your system regularly. You don't want to be surprised by unexpected failures when your system is under heavy load.
  • Test timeouts. Create tests that mock/fake dependencies to test timeout code. If your timeout code does something bad, it may cause a bug that only occurs under certain system conditions.
  • Test with debug and optimized builds. You may find that a well behaved debug build works fine, but the system fails in strange ways once optimized.
  • Test under constrained resources. Try reducing the number of data centers, machines, processes, threads, available disk space, or available memory. Also try simulating reduced network bandwidth.
  • Test for longevity. Some bugs require a long period of time to reveal themselves. For example, persistent data may become corrupt over time.
  • Use dynamic analysis tools like memory debuggers, ASan, TSan, and MSan regularly. They can help identify many categories of unreproducible memory/threading issues.


Enforce preconditions

I’ve seen many well-meaning functions with a high tolerance for bad input. For example, consider this function:

void ScheduleEvent(int timeDurationMilliseconds) {
  if (timeDurationMilliseconds <= 0) {
    timeDurationMilliseconds = 1;
  }
  ...
}

This function is trying to help the calling code by adjusting the input to an acceptable value, but it may be doing damage by masking a bug. The calling code may be experiencing any number of problems described in this article, and passing garbage to this function will always work fine. The more functions that are written with this level of tolerance, the harder it is to trace back to the root cause, and the more likely it becomes that the end user will see garbage. Enforcing preconditions, for instance by using asserts, may actually cause a higher number of failures for new systems, but as systems mature, and many minor/major problems are identified early on, these checks can help improve long-term reliability.

Guidelines for development:

  • Enforce preconditions in your functions unless you have a good reason not to.
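
For comparison, here is a minimal sketch of the earlier example with the precondition enforced rather than silently “fixed” (written in Java; throwing IllegalArgumentException is one option, an assert is another):

public class Scheduler {
  public void scheduleEvent(int timeDurationMilliseconds) {
    if (timeDurationMilliseconds <= 0) {
      throw new IllegalArgumentException(
          "timeDurationMilliseconds must be positive, got " + timeDurationMilliseconds);
    }
    // ... schedule the event ...
  }
}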


Use defensive programming

Defensive programming is another tried-and-true technique that is great at minimizing unreproducible bugs. If your code calls a dependency to do something, and that dependency quietly fails or returns garbage, how does your code handle it? You could test for situations like this via mocking or faking, but it’s even better to have your production code do sanity checking on its dependencies. For example:

double GetMonthlyLoanPayment() {
  double rate = GetTodaysInterestRateFromExternalSystem();
  if (rate < 0.001 || rate > 0.5) {
    throw BadInterestRate(rate);
  }
  ...
}

Guidelines for development:

  • When possible, use defensive programming to verify the work of your dependencies with known risks of failure like user-provided data, I/O operations, and RPC calls.

Guidelines for testing:

  • Use fuzz testing to test your system's hardiness when enduring bad data.
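
A minimal sketch of a home-grown fuzz loop follows; ConfigParser is a hypothetical system under test, and dedicated fuzzing tools go much further, but even a loop like this catches crashes on malformed input:

import java.util.Random;

public class ConfigParserFuzzTest {
  public static void main(String[] args) {
    Random random = new Random(42);  // fixed seed so any failure is reproducible
    for (int i = 0; i < 10000; i++) {
      byte[] garbage = new byte[random.nextInt(256)];
      random.nextBytes(garbage);
      try {
        ConfigParser.parse(garbage);  // hypothetical parser under test
      } catch (IllegalArgumentException expected) {
        // Rejecting bad input cleanly is fine; crashes, hangs, or other exceptions are not.
      }
    }
  }
}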


Don’t hide all errors from the user

There has been a trend in recent years toward hiding failures from users at all costs. In many cases, it makes perfect sense, but in some, we have gone overboard. Code that is very quiet and permissive during minor failures will allow an uninformed user to continue working in a failed state. The software may ultimately reach a fatal tipping point, and all the error conditions that led to failure have been ignored. If the user doesn’t know about the prior errors, they will not be able to report them, and you may not be able to reproduce them.

Guidelines for development:

  • Only hide errors from the user when you are certain that there is no impact to system state or the user.
  • Any error with impact to the user should be reported to the user with instructions for how to proceed. The information shown to the user, combined with data available to an engineer, should be enough to determine what went wrong.


Test error handling

Error handling code is among the most common code to remain untested. Don’t skip test coverage here. Bad error handling code can cause unreproducible bugs and create great risk if it does not handle fatal errors well.

Guidelines for testing:

  • Always test your error handling code. This is usually best accomplished by mocking or faking the component triggering the error.
  • It’s also a good practice to examine your log quality for all types of error handling.


Check for duplicate keys

If unique identifiers or data access keys are generated using random data or are not guaranteed to be globally unique, duplicate keys may cause data corruption or concurrency issues. Key duplication bugs are very difficult to reproduce.

Guidelines for development:

  • Try to guarantee uniqueness of all keys.
  • When not possible to guarantee unique keys, check if the recently generated key is already in use before using it.
  • Watch out for potential race conditions here and avoid them with synchronization.
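
A minimal sketch of the check-before-use guideline (illustrative only; in a real system the set of issued keys usually lives in the data store rather than in memory):

import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

public class KeyGenerator {
  private final Set<String> issuedKeys = new HashSet<>();

  /** Synchronized so the check and the reservation happen atomically, avoiding the race. */
  public synchronized String newKey() {
    String key;
    do {
      key = UUID.randomUUID().toString();
    } while (!issuedKeys.add(key));  // add() returns false if the key is already in use
    return key;
  }
}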


Test for concurrent data access

Some bugs only reveal themselves when multiple clients are reading/writing the same data. Your stress tests might be covering cases like these, but if they are not, you should have special tests for concurrent data access. Cases like these are often unreproducible. For example, a user may have two instances of your app running against the same account, and they may not realize this when reporting a bug.

Guidelines for testing:

  • Always test for concurrent data access if it’s a feature of the system. Actually, even if it’s not a feature, verify that the system rejects it. Testing concurrency can be challenging. An approach that usually works for me is to create many worker threads that simultaneously attempt access and a master thread that monitors and verifies that some number of attempts were indeed concurrent, blocked or allowed as expected, and all were successful. Programmatic post-analysis of all attempts and changing system state may also be necessary to ensure that the system behaved well.
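
A minimal sketch of that worker/master pattern, where an AtomicInteger stands in for the real shared data and system under test:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ConcurrentAccessTest {
  public static void main(String[] args) throws InterruptedException {
    final AtomicInteger sharedCounter = new AtomicInteger();  // stands in for real shared data
    final int workers = 50;
    final CountDownLatch startSignal = new CountDownLatch(1);
    ExecutorService pool = Executors.newFixedThreadPool(workers);

    for (int i = 0; i < workers; i++) {
      pool.execute(() -> {
        try {
          startSignal.await();              // line all workers up so access is truly concurrent
          sharedCounter.incrementAndGet();  // the concurrent operation under test
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
      });
    }

    startSignal.countDown();                // release all workers at once
    pool.shutdown();
    pool.awaitTermination(10, TimeUnit.SECONDS);

    // The "master" verification step: every attempt should have been applied exactly once.
    if (sharedCounter.get() != workers) {
      throw new AssertionError("Expected " + workers + " updates, saw " + sharedCounter.get());
    }
  }
}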


Steer clear of undefined behavior and non-deterministic access to data

Some APIs and basic operations have warnings about undefined behavior when in certain states or provided with certain input. Similarly, some data structures do not guarantee an iteration order (example: Java’s Set). Code that ignores these warnings may work fine most of the time but fail in unusual ways that are hard to reproduce.

Guidelines for development:

  • Understand when the APIs and operations you use might have undefined behavior and prevent those conditions.
  • Do not depend on data structure iteration order unless it is guaranteed. It is a common mistake to depend on the ordering of sets or associative arrays.


Log the details for errors or test failures

Issues described in this article can be easier to reproduce and debug when the logs contain enough detail to understand the conditions that led to an error.

Guidelines for development:

  • Follow good logging practices, especially in your error handling code.
  • If logs are stored on a user’s machine, create an easy way for them to provide you the logs.

Guidelines for testing:

  • Save your test logs for potential analysis later.


Anything to add?

Have I missed any important guidelines for minimizing these bugs? What is your favorite hard-to-reproduce bug that you discovered and resolved?

Date: Tuesday, 18 Mar 2014 15:10
by Erik Kuefler

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

Unit tests are important tools for verifying that our code is correct. But writing good tests is about much more than just verifying correctness — a good unit test should exhibit several other properties in order to be readable and maintainable.

One property of a good test is clarity. Clarity means that a test should serve as readable documentation for humans, describing the code being tested in terms of its public APIs. Tests shouldn't refer directly to implementation details. The names of a class's tests should say everything the class does, and the tests themselves should serve as examples for how to use the class.

Two more important properties are completeness and conciseness. A test is complete when its body contains all of the information you need to understand it, and concise when it doesn't contain any other distracting information. This test fails on both counts:

@Test public void shouldPerformAddition() {
  Calculator calculator = new Calculator(new RoundingStrategy(),
      "unused", ENABLE_COSIN_FEATURE, 0.01, calculusEngine, false);
  int result = calculator.doComputation(makeTestComputation());
  assertEquals(5, result); // Where did this number come from?
}

Lots of distracting information is being passed to the constructor, and the important parts are hidden off in a helper method. The test can be made more complete by clarifying the purpose of the helper method, and more concise by using another helper to hide the irrelevant details of constructing the calculator:

@Test public void shouldPerformAddition() {
  Calculator calculator = newCalculator();
  int result = calculator.doComputation(makeAdditionComputation(2, 3));
  assertEquals(5, result);
}

One final property of a good test is resilience. Once written, a resilient test doesn't have to change unless the purpose or behavior of the class being tested changes. Adding new behavior should only require adding new tests, not changing old ones. The original test above isn't resilient since you'll have to update it (and probably dozens of other tests!) whenever you add a new irrelevant constructor parameter. Moving these details into the helper method solved this problem.

Date: Monday, 03 Mar 2014 13:19
by Hongfei Ding, Software Engineer, Shanghai

Mockito is a popular open source Java testing framework that allows the creation of mock objects. For example, we have the below interface used in our SUT (System Under Test):
interface Service {
  Data get();
}

In our test, normally we want to fake the Service’s behavior to return canned data, so that the unit test can focus on testing the code that interacts with the Service. We use a when-return clause to stub a method:
when(service.get()).thenReturn(cannedData);

But sometimes you need mock object behavior that's too complex for when-return. An Answer object can be a clean way to do this once you get the syntax right.

A common usage of Answer is to stub asynchronous methods that have callbacks. For example, we have mocked the interface below:
interface Service {
  void get(Callback callback);
}

Here you’ll find that when-return is not that helpful anymore. Answer is the replacement. For example, we can emulate a success by calling the onSuccess function of the callback.
doAnswer(new Answer<Void>() {
  public Void answer(InvocationOnMock invocation) {
    Callback callback = (Callback) invocation.getArguments()[0];
    callback.onSuccess(cannedData);
    return null;
  }
}).when(service).get(any(Callback.class));

Answer can also be used to make smarter stubs for synchronous methods. Smarter here means the stub can return a value depending on the input, rather than canned data. It’s sometimes quite useful. For example, we have mocked the Translator interface below:
interface Translator {
  String translate(String msg);
}

We might choose to mock Translator to return a constant string and then assert the result. However, that test is not thorough, because the input to the translator function has been ignored. To improve this, we might capture the input and do extra verification, but then we start to fall into the “testing interaction rather than testing state” trap.

A good usage of Answer is to reverse the input message as a fake translation. That way, checking the result string assures two things: 1) translate has been invoked, and 2) the msg being translated is correct. Notice that this time we’ve used the thenAnswer syntax, a twin of doAnswer, for stubbing a non-void method.
when(translator.translate(any(String.class))).thenAnswer(reverseMsg());
...
// extracted a method to put a descriptive name
private static Answer<String> reverseMsg() {
  return new Answer<String>() {
    public String answer(InvocationOnMock invocation) {
      return reverseString((String) invocation.getArguments()[0]);
    }
  };
}

Last but not least, if you find yourself writing many nontrivial Answers, you should consider using a fake instead.

Date: Friday, 08 Nov 2013 10:59
by Patrik Höglund

The WebRTC project is all about enabling peer-to-peer video, voice and data transfer in the browser. To give our users the best possible experience we need to adapt the quality of the media to the bandwidth and processing power we have available. Our users encounter a wide variety of network conditions and run on a variety of devices, from powerful desktop machines with a wired broadband connection to laptops on WiFi to mobile phones on spotty 3G networks.

We want to ensure good quality for all these use cases in our implementation in Chrome. To some extent we can do this with manual testing, but the breakneck pace of Chrome development makes it very hard to keep up (several hundred patches land every day)! Therefore, we'd like to test the quality of our video and voice transfer with an automated test. Ideally, we’d like to test for the most common network scenarios our users encounter, but to start we chose to implement a test where we have plenty of CPU and bandwidth. This article covers how we built such a test.

Quality Metrics
First, we must define what we want to measure. For instance, the WebRTC video quality test uses peak signal-to-noise ratio and structural similarity to measure the quality of the video (or to be more precise, how much the output video differs from the input video; see this GTAC 13 talk for more details). The quality of the user experience is a subjective thing though. Arguably, one probably needs dozens of different metrics to really ensure a good user experience. For video, we would have to (at the very least) have some measure for frame rate and resolution besides correctness. To have the system send somewhat correct video frames seemed the most important though, which is why we chose the above metrics.

For this test we wanted to start with a similar correctness metric, but for audio. It turns out there's an algorithm called Perceptual Evaluation of Speech Quality (PESQ) which analyzes two audio files and tells you how similar they are, while taking into account how the human ear works (so it ignores differences a normal person would not hear anyway). That's great, since we want our metrics to measure the user experience as much as possible. There are many aspects of voice transfer you could measure, such as latency (which is really important for voice calls), but for now we'll focus on measuring how much a voice audio stream gets distorted by the transfer.

Feeding Audio Into WebRTC
In the WebRTC case we already had a test which would launch a Chrome browser, open two tabs, get the tabs talking to each other through a signaling server and set up a call on a single machine. Then we just needed to figure out how to feed a reference audio file into a WebRTC call and record what comes out on the other end. This part was actually harder than it sounds. The main WebRTC use case is that the web page acquires the user's mic through getUserMedia, sets up a PeerConnection with some remote peer and sends the audio from the mic through the connection to the peer where it is played in the peer's audio output device.



WebRTC calls transmit voice, video and data peer-to-peer, over the Internet.

But since this is an automated test, of course we could not have someone speak in a microphone every time the test runs; we had to feed in a known input file, so we had something to compare the recorded output audio against.

Could we duct-tape a small stereo to the mic and play our audio file on the stereo? That's not very maintainable or reliable, not to mention annoying for anyone in the vicinity. What about some kind of fake device driver which makes a microphone-like device appear on the device level? The problem with that is that it's hard to control a driver from the userspace test program. Also, the test will be more complex and flaky, and the driver interaction will not be portable.[1]

Instead, we chose to sidestep this problem. We used a solution where we load an audio file with WebAudio and play that straight into the peer connection through the WebAudio-PeerConnection integration. That way we start the playing of the file from the same renderer process as the call itself, which made it a lot easier to time the start and end of the file. We still needed to be careful to avoid playing the file too early or too late, so we don't clip the audio at the start or end - that would destroy our PESQ scores! - but it turned out to be a workable approach.[2]

Recording the Output
Alright, so now we could get a WebRTC call set up with a known audio file with decent control of when the file starts playing. Now we had to record the output. There are a number of possible solutions. The most end-to-end way is to straight up record what the system sends to default audio out (like speakers or headphones). Alternatively, we could write a hook in our application to dump our audio as late as possible, like when we're just about to send it to the sound card.

We went with the former. Our colleagues in the Chrome video stack team in Kirkland had already found that it's possible to configure a Windows or Linux machine to send the system's audio output (i.e. what plays on the speakers) to a virtual recording device. If we make that virtual recording device the default one, simply invoking SoundRecorder.exe and arecord respectively will record what the system is playing out.

They found this works well if one also uses the sox utility to eliminate silence around the actual audio content (recall we had some safety margins at both ends to ensure we record the whole input file as playing through the WebRTC call). We adopted the same approach, since it records what the user would hear, and yet uses only standard tools. This means we don't have to install additional software on the myriad machines that will run this test.[3]

Analyzing Audio
The only remaining step was to compare the silence-eliminated recording with the input file. When we first did this, we got a really bad score (like 2.0 out of 5.0, which means PESQ thinks it’s barely intelligible). This didn't seem to make sense, since both the input and recording sounded very similar. Turns out we didn’t think about the following:

  • We were comparing a full-band (24 kHz) input file to a wide-band (8 kHz) result (although both files were sampled at 48 kHz). This essentially amounted to a low pass filtering of the result file.
  • Both files were in stereo, but PESQ is only mono-aware.
  • The files were 32-bit, but the PESQ implementation is designed for 16 bits.

As you can see, it’s important to pay attention to what format arecord and SoundRecorder.exe record in, and make sure the input file is recorded in the same way. After correcting the input file and “rebasing”, we got the score up to about 4.0.[4]

Thus, we ended up with an automated test that runs continuously on the torrent of Chrome change lists and protects WebRTC's ability to transmit sound. You can see the finished code here. With automated tests and cleverly chosen metrics you can protect against most regressions a user would notice. If your product includes video and audio handling, such a test is a great addition to your testing mix.


How the components of the test fit together.

Future work

  • It might be possible to write a Chrome extension which dumps the audio from Chrome to a file. That way we get a simpler-to-maintain and portable solution. It would be less end-to-end but more than worth it due to the simplified maintenance and setup. Also, the recording tools we use are not perfect and add some distortion, which makes the score less accurate.
  • There are other algorithms than PESQ to consider - for instance, POLQA is the successor to PESQ and is better at analyzing high-bandwidth audio signals.
  • We are working on a solution which will run this test under simulated network conditions. Simulated networks combined with this test is a really powerful way to test our behavior under various packet loss and delay scenarios and ensure we deliver a good experience to all our users, not just those with great broadband connections. Stay tuned for future articles on that topic!
  • Investigate feasibility of running this set-up on mobile devices.




1It would be tolerable if the driver was just looping the input file, eliminating the need for the test to control the driver (i.e. the test doesn't have to tell the driver to start playing the file). This is actually what we do in the video quality test. It's a much better fit to take this approach on the video side since each recorded video frame is independent of the others. We can easily embed barcodes into each frame and evaluate them independently.

This seems much harder for audio. We could possibly do audio watermarking, or we could embed a kind of start marker (for instance, using DTMF tones) in the first two seconds of the input file and play the real content after that, and then do some fancy audio processing on the receiving end to figure out the start and end of the input audio. We chose not to pursue this approach due to its complexity.

2Unfortunately, this also means we will not test the capturer path (which handles microphones, etc in WebRTC). This is an example of the frequent tradeoffs one has to do when designing an end-to-end test. Often we have to trade end-to-endness (how close the test is to the user experience) with robustness and simplicity of a test. It's not worth it to cover 5% more of the code if the test becomes unreliable or radically more expensive to maintain. Another example: A WebRTC call will generally involve two peers on different devices separated by the real-world internet. Writing such a test and making it reliable would be extremely difficult, so we make the test single-machine and hope we catch most of the bugs anyway.

3It's important to keep the continuous build setup simple and the build machines easy to configure - otherwise you will inevitably pay a heavy price in maintenance when you try to scale your testing up.

4When sending audio over the internet, we have to compress it since lossless audio consumes way too much bandwidth. WebRTC audio generally sounds great, but there are still compression artifacts if you listen closely (and, in fact, the recording tools are not perfect and add some distortion as well). Given that this test is more about detecting regressions than measuring some absolute notion of quality, we'd like to downplay those artifacts. As our Kirkland colleagues found, one of the ways to do that is to "rebase" the input file. That means we start with a pristine recording, feed that through the WebRTC call and record what comes out on the other end. After manually verifying the quality, we use that as our input file for the actual test. In our case, it pushed our PESQ score up from 3 to about 4 (out of 5), which gives us a bit more sensitivity to regressions.

Date: Friday, 18 Oct 2013 15:51
Cross-posted from the Android Developers Google+ Page

Earlier this year, we presented Espresso at GTAC as a solution to the UI testing problem. Today we are announcing the launch of the developer preview for Espresso!

The compelling thing about developing Espresso was making it easy and fun for developers to write reliable UI tests. Espresso has a small, predictable, and easy to learn API, which is still open for customization. But most importantly - Espresso removes the need to think about the complexity of multi-threaded testing. With Espresso, you can think procedurally and write concise, beautiful, and reliable Android UI tests quickly.

Espresso is now being used by over 30 applications within Google (Drive, Maps and G+, just to name a few). Starting from today, Espresso will also be available to our great developer community. We hope you will also enjoy testing your applications with Espresso, and we look forward to your feedback and contributions!


Android Test Kit: https://code.google.com/p/android-test-kit/

Date: Friday, 30 Aug 2013 13:30
by Eduardo Bravo Ortiz

“Mobile first” is the motto these days for many companies. However, being able to test a mobile app in a meaningful way is very challenging. On the Google+ team we have had our share of trial and error that has led us to successful strategies for testing mobile applications on both iOS and Android.

General

  • Understand the platform. Testing on Android is not the same as testing on iOS. The testing tools and frameworks available for each platform are significantly different. (e.g., Android uses Java while iOS uses Objective-C, UI layouts are built differently on each platform, and UI testing frameworks also work very differently on the two platforms.)
  • Stabilize your test suite and test environments. Flaky tests are worse than having no tests, because a flaky test pollutes your build health and decreases the credibility of your suite.
  • Break down testing into manageable pieces. There are too many complex pieces when testing on mobile (e.g., emulator/device state, actions triggered by the OS).
  • Provide a hermetic test environment for your tests. Mobile UI tests are flaky by nature; don’t add more flakiness to them by having external dependencies.
  • Unit tests are the backbone of your mobile test strategy. Try to separate the app code logic from the UI as much as possible. This separation will make unit tests more granular and faster.


Android Testing

Unit Tests
Separating UI code from code logic is especially hard in Android. For example, an Activity is expected to act as both a controller and a view at the same time; keep this in mind when writing unit tests. Another useful recommendation is to decouple unit tests from the Android emulator; this removes the need to build and install an APK, so your tests run much faster. Robolectric is a perfect tool for this: it stubs out the implementation of the Android platform while your tests run.
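
For illustration, a Robolectric-based unit test might look like the sketch below; WelcomeActivity and the view ID are hypothetical, but the test runs on the local JVM with no emulator and no APK install:

import static org.junit.Assert.assertEquals;

import android.widget.TextView;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.robolectric.Robolectric;
import org.robolectric.RobolectricTestRunner;

@RunWith(RobolectricTestRunner.class)  // Robolectric supplies the Android platform stubs
public class WelcomeActivityTest {
  @Test
  public void createActivity_showsWelcomeMessage() {
    WelcomeActivity activity = Robolectric.buildActivity(WelcomeActivity.class).create().get();
    TextView greeting = (TextView) activity.findViewById(R.id.welcome_text);  // hypothetical view ID
    assertEquals("Welcome!", greeting.getText().toString());
  }
}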

Hermetic UI Tests
A hermetic UI test typically runs as a test without network calls or external dependencies. Once the tests can run in a hermetic environment, a white-box testing framework like Espresso can simulate user actions on the UI; because it is tightly coupled to the app code, Espresso also synchronizes your test actions with events on the UI thread, reducing flakiness. More information on Espresso is coming in a future Google Testing Blog article.

Diagram: Non-Hermetic Flow vs. Hermetic Flow


Monkey Tests
Monkey tests look for crashes and ANRs by stressing your Android application. They exercise pseudo-random events like clicks or gestures on the app under test. Monkey test results are reproducible to a certain extent; timing and latency are not completely under your control and can cause a test failure. Re-running the same monkey test against the same configuration will often reproduce these failures, though. If you run them daily against different SDKs, they are very effective at catching bugs earlier in the development cycle of a new release.
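
On Android, monkey tests are typically driven through the platform's monkey tool over adb. As a minimal sketch, a host-side harness might kick off a run like this (the package name is hypothetical, and exception handling is omitted):

// Fire 5,000 pseudo-random events at the app under test. A fixed seed (-s) and a
// throttle between events make failures easier to reproduce; -v controls verbosity.
new ProcessBuilder(
    "adb", "shell", "monkey",
    "-p", "com.example.myapp",
    "-s", "42",
    "--throttle", "250",
    "-v", "5000")
    .inheritIO()
    .start()
    .waitFor();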

iOS Testing

Unit Tests
Unit test frameworks like OCUnit, which comes bundled with Xcode, and GTMSenTestCase are both good choices.

Hermetic UI Tests
KIF has proven to be a powerful solution for writing Objective-C UI tests. It runs in-process which allows tests to be more tightly coupled with the app under test, making the tests inherently more stable. KIF allows iOS developers to write tests using the same language as their application.

Following the same paradigm as Android UI tests, you want Objective-C tests to be hermetic. A good approach is to mock the server with pre-canned responses. Since KIF tests run in-process, responses can be built programmatically, making tests easier to maintain and more stable.

Monkey Tests
iOS has no native tool for writing monkey tests equivalent to Android's; however, this type of test still adds value on iOS (e.g., we found 16 crashes in one of our recent Google+ releases). The Google+ team developed its own custom monkey testing framework, but there are also many third-party options available.

Backend Testing

A mobile testing strategy is not complete without testing the integration between server backends and mobile clients. This is especially true when the release cycles of the mobile clients and backends are very different. A replay test strategy can be very effective at preventing backends from breaking mobile clients. The theory behind this strategy is to simulate mobile clients by having a set of golden request and response files that are known to be correct. The replay test suite should then send golden requests to the backend server and assert that the response returned by the server matches the expected golden response. Since client/server responses are often not completely deterministic, you will need to utilize a diffing tool that can ignore expected differences.

To make this strategy successful, you need a way to seed a repeatable data set on the backend and to make all dependencies that are not relevant to your backend hermetic. Using in-memory servers with fake data or an RPC replay to external dependencies are good ways of achieving repeatable data sets and hermetic environments. The Google+ mobile backend uses Guice for dependency injection, which allows us to easily swap out dependencies with fake implementations during testing and to seed data fixtures.
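
As a rough sketch (not the actual Google+ infrastructure), a single replay test case might look like this, where readFile(), inMemoryBackend, and canonicalize() are hypothetical helpers:

public void testGetStream_matchesGoldenResponse() throws Exception {
  // Golden request/response pair recorded from a known-good client build.
  String goldenRequest = readFile("goldens/get_stream_request.json");
  String goldenResponse = readFile("goldens/get_stream_response.json");
  // The backend runs in-memory with seeded fixture data, so results are repeatable.
  String actualResponse = inMemoryBackend.handle(goldenRequest);
  // Strip fields that are legitimately non-deterministic (timestamps, request IDs) before comparing.
  assertEquals(canonicalize(goldenResponse), canonicalize(actualResponse));
}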

Diagram: Normal flow vs Replay Tests flow


Conclusion

Mobile app testing can be very challenging, but building a comprehensive test strategy that understands the nature of different platforms and tools is the key to success. Providing a reliable and hermetic test environment is as important as the tests you write.

Finally, make sure you prioritize your automation efforts according to your team's needs. This is how we prioritize on the Google+ team:
  1. Unit tests: These should be your first priority on either Android or iOS. They run fast and are less flaky than any other type of test.
  2. Backend tests: Make sure your backend doesn’t break your mobile clients. Breakages are very likely when the release cycles of mobile clients and backends are different.
  3. UI tests: These are slower and flakier by nature, and they take more time to write and maintain. Make sure you provide coverage for at least the critical paths of your app.
  4. Monkey tests: This is the final step to complete your mobile automation strategy.


Happy mobile testing from the Google+ team.
Author: "Google Testing Bloggers (noreply@blogger.com)" Tags: "Eduardo Bravo Ortiz, Google+, Mobile"
Date: Thursday, 22 Aug 2013 15:49
By Andrew Trenk

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

A test double is an object that can stand in for a real object in a test, similar to how a stunt double stands in for an actor in a movie. Test doubles of all kinds are often referred to simply as “mocks”, but it's important to distinguish between the different types of test doubles, since they have different uses. The most common types of test doubles are stubs, mocks, and fakes.

A stub has no logic, and only returns what you tell it to return. Stubs can be used when you need an object to return specific values in order to get your code under test into a certain state. While it's usually easy to write stubs by hand, using a mocking framework is often a convenient way to reduce boilerplate.

// Pass in a stub that was created by a mocking framework.
AccessManager accessManager = new AccessManager(stubAuthenticationService);
// The user shouldn't have access when the authentication service returns false.
when(stubAuthenticationService.isAuthenticated(USER_ID)).thenReturn(false);
assertFalse(accessManager.userHasAccess(USER_ID));
// The user should have access when the authentication service returns true.
when(stubAuthenticationService.isAuthenticated(USER_ID)).thenReturn(true);
assertTrue(accessManager.userHasAccess(USER_ID));

A mock has expectations about the way it should be called, and a test should fail if it’s not called that way. Mocks are used to test interactions between objects, and are useful in cases where there are no other visible state changes or return results that you can verify (e.g. if your code reads from disk and you want to ensure that it doesn't do more than one disk read, you can use a mock to verify that the method that does the read is only called once).

// Pass in a mock that was created by a mocking framework.
AccessManager accessManager = new AccessManager(mockAuthenticationService);
accessManager.userHasAccess(USER_ID);
// The test should fail if accessManager.userHasAccess(USER_ID) didn't call
// mockAuthenticationService.isAuthenticated(USER_ID) or if it called it more than once.
verify(mockAuthenticationService).isAuthenticated(USER_ID);

A fake doesn’t use a mocking framework: it’s a lightweight implementation of an API that behaves like the real implementation, but isn't suitable for production (e.g. an in-memory database). Fakes can be used when you can't use a real implementation in your test (e.g. if the real implementation is too slow or it talks over the network). You shouldn't need to write your own fakes often since fakes should usually be created and maintained by the person or team that owns the real implementation.

// Creating the fake is fast and easy.
AuthenticationService fakeAuthenticationService = new FakeAuthenticationService();
AccessManager accessManager = new AccessManager(fakeAuthenticationService);
// The user shouldn't have access since the authentication service doesn't
// know about the user.
assertFalse(accessManager.userHasAccess(USER_ID));
// The user should have access after it's added to the authentication service.
fakeAuthenticationService.addAuthenticatedUser(USER_ID);
assertTrue(accessManager.userHasAccess(USER_ID));

The term “test double” was coined by Gerard Meszaros in the book xUnit Test Patterns. You can find more information about test doubles in the book, or on the book’s website. You can also find a discussion about the different types of test doubles in this article by Martin Fowler.

Author: "Google Testing Bloggers (noreply@blogger.com)" Tags: "Andrew Trenk, TotT"
Date: Monday, 05 Aug 2013 13:04
By Andrew Trenk

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

Your trusty Calculator class is one of your most popular open source projects, with many happy users:

public class Calculator {
public int add(int a, int b) {
return a + b;
}
}

You also have tests to help ensure that it works properly:

public void testAdd() {
assertEquals(3, calculator.add(2, 1));
assertEquals(2, calculator.add(2, 0));
assertEquals(1, calculator.add(2, -1));
}

However, a fancy new library promises several orders of magnitude speedup in your code if you use it in place of the addition operator. You excitedly change your code to use this library:

public class Calculator {
private AdderFactory adderFactory;
public Calculator(AdderFactory adderFactory) { this.adderFactory = adderFactory; }
public int add(int a, int b) {
Adder adder = adderFactory.createAdder();
ReturnValue returnValue = adder.compute(new Number(a), new Number(b));
return returnValue.convertToInteger();
}
}

That was easy, but what do you do about the tests for this code? None of the existing tests should need to change: you only changed the code's implementation; its user-facing behavior didn't change. In most cases, tests should focus on testing your code's public API, and your code's implementation details shouldn't need to be exposed to tests.

Tests that are independent of implementation details are easier to maintain since they don't need to be changed each time you make a change to the implementation. They're also easier to understand since they basically act as code samples that show all the different ways your class's methods can be used, so even someone who's not familiar with the implementation should usually be able to read through the tests to understand how to use the class.

There are many cases where you do want to test implementation details (e.g. you want to ensure that your implementation reads from a cache instead of from a datastore), but this should be less common since in most cases your tests should be independent of your implementation.

Note that test setup may need to change if the implementation changes (e.g. if you change your class to take a new dependency in its constructor, the test needs to pass in this dependency when it creates the class), but the actual test itself typically shouldn't need to change if the code's user-facing behavior doesn't change.
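
For example, with the Calculator above, only the setup line changes to construct the new dependency (assuming AdderFactory has a no-argument constructor, which the snippet above doesn't show); the assertions stay exactly the same:

public void testAdd() {
  // Setup changed to pass in the new dependency; the assertions are untouched.
  Calculator calculator = new Calculator(new AdderFactory());
  assertEquals(3, calculator.add(2, 1));
  assertEquals(2, calculator.add(2, 0));
  assertEquals(1, calculator.add(2, -1));
}
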
Author: "Google Testing Bloggers (noreply@blogger.com)" Tags: "Andrew Trenk, TotT"
Date: Friday, 28 Jun 2013 15:52
By Jonathan Rockway and Andrew Trenk

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

After many years of blogging, you decide to try out your blog platform's API. You start playing around with it, but then you realize: how can you tell that your code works without having your test talk to a remote blog server?

public void deletePostsWithTag(Tag tag) {
for (Post post : blogService.getAllPosts()) {
if (post.getTags().contains(tag)) { blogService.deletePost(post.getId()); }
}
}

Fakes to the rescue! A fake is a lightweight implementation of an API that behaves like the real implementation, but isn't suitable for production. In the case of the blog service, all you care about is the ability to retrieve and delete posts. While a real blog service would need a database and support for multiple frontend servers, you don’t need any of that to test your code; all you need is an implementation of the blog service API. You can achieve this with a simple in-memory implementation:

public class FakeBlogService implements BlogService {  
private final Set<Post> posts = new HashSet<Post>(); // Store posts in memory
public void addPost(Post post) { posts.add(post); }
public void deletePost(int id) {
for (Post post : posts) {
if (post.getId() == id) { posts.remove(post); return; }
}
throw new PostNotFoundException("No post with ID " + id);
}
public Set<Post> getAllPosts() { return posts; }
}

Now your tests can swap out the real blog service with the fake and the code under test won't even know the difference.
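
For example, a test of the deletePostsWithTag() code above might look like the sketch below. The article doesn't name the class that owns deletePostsWithTag(), so BlogCleaner is a hypothetical wrapper, and the Post and Tag constructors (and their equals() behavior) are assumptions as well:

public void testDeletePostsWithTag_removesOnlyMatchingPosts() {
  FakeBlogService blogService = new FakeBlogService();
  Post catPost = new Post(1, new Tag("cats"));    // hypothetical constructors
  Post boatPost = new Post(2, new Tag("boats"));
  blogService.addPost(catPost);
  blogService.addPost(boatPost);
  new BlogCleaner(blogService).deletePostsWithTag(new Tag("cats"));  // hypothetical class under test
  // Only the post tagged "cats" should have been deleted.
  assertFalse(blogService.getAllPosts().contains(catPost));
  assertTrue(blogService.getAllPosts().contains(boatPost));
}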

Fakes are useful for when you can't use the real implementation in a test, such as if the real implementation is too slow (e.g. it takes several minutes to start up) or if it's non-deterministic (e.g. it talks to an external machine that may not be available when your test runs).

You shouldn't need to write your own fakes often, since each fake should be created and maintained by the person or team that owns the real implementation. If you’re using an API that doesn't provide a fake, it’s often easy to create one yourself: write a wrapper around the part of the code that you can't use in your tests, and create a fake for that wrapper. Remember to create the fake at the lowest level possible (e.g. if you can't use a database in your tests, fake out the database instead of faking out all of your classes that talk to the database). That way you'll have fewer fakes to maintain, and your tests will execute more real code for important parts of your system.

Fakes should have their own tests to ensure that they behave like the real implementation (e.g. if the real implementation throws an exception when given certain input, the fake implementation should also throw an exception when given the same input). One way to do this is to write tests against the API's public interface, and run those tests against both the real and fake implementations.

If you still don't fully trust that your code will work in production if all your tests use a fake, you can write a small number of integration tests to ensure that your code will work with the real implementation.
Author: "Google Testing Bloggers (noreply@blogger.com)" Tags: "Andrew Trenk, Jonathan Rockway, TotT"
Date: Friday, 28 Jun 2013 13:30
By Andrew Trenk

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

When writing tests for your code, it can seem easy to ignore your code's dependencies by mocking them out.

public void testCreditCardIsCharged() {
paymentProcessor = new PaymentProcessor(mockCreditCardServer);
when(mockCreditCardServer.isServerAvailable()).thenReturn(true);
when(mockCreditCardServer.beginTransaction()).thenReturn(mockTransactionManager);
when(mockTransactionManager.getTransaction()).thenReturn(transaction);
when(mockCreditCardServer.pay(transaction, creditCard, 500)).thenReturn(mockPayment);
when(mockPayment.isOverMaxBalance()).thenReturn(false);
paymentProcessor.processPayment(creditCard, Money.dollars(500));
verify(mockCreditCardServer).pay(transaction, creditCard, 500);
}

However, not using mocks can sometimes result in tests that are simpler and more useful.

public void testCreditCardIsCharged() {
paymentProcessor = new PaymentProcessor(creditCardServer);
paymentProcessor.processPayment(creditCard, Money.dollars(500));
assertEquals(500, creditCardServer.getMostRecentCharge(creditCard));
}

Overusing mocks can cause several problems:

- Tests can be harder to understand. Instead of just a straightforward usage of your code (e.g. pass in some values to the method under test and check the return result), you need to include extra code to tell the mocks how to behave. Having this extra code detracts from the actual intent of what you’re trying to test, and very often this code is hard to understand if you're not familiar with the implementation of the production code.

- Tests can be harder to maintain. When you tell a mock how to behave, you're leaking implementation details of your code into your test. When implementation details in your production code change, you'll need to update your tests to reflect these changes. Tests should typically know little about the code's implementation, and should focus on testing the code's public interface.

- Tests can provide less assurance that your code is working properly. When you tell a mock how to behave, the only assurance you get with your tests is that your code will work if your mocks behave exactly like your real implementations. This can be very hard to guarantee, and the problem gets worse as your code changes over time, as the behavior of the real implementations is likely to get out of sync with your mocks.

Some signs that you're overusing mocks are if you're mocking out more than one or two classes, or if one of your mocks specifies how more than one or two methods should behave. If you're trying to read a test that uses mocks and find yourself mentally stepping through the code being tested in order to understand the test, then you're probably overusing mocks.

Sometimes you can't use a real dependency in a test (e.g. if it's too slow or talks over the network), but there may be better options than using mocks, such as a hermetic local server (e.g. a credit card server that you start up on your machine specifically for the test) or a fake implementation (e.g. an in-memory credit card server).

For more information about using hermetic servers, see http://googletesting.blogspot.com/2012/10/hermetic-servers.html. Stay tuned for a future Testing on the Toilet episode about using fake implementations.

Author: "Google Testing Bloggers (noreply@blogger.com)" Tags: "Andrew Trenk, TotT"
Date: Tuesday, 28 May 2013 15:12
By Anthony F. Voellm (aka Tony the @p3rfguy / G+) and Emily Bedont


On Wednesday, October 24th, while sitting under the Solar System, 30 software engineers from the Greater Seattle area came together at Google Kirkland to partake in the first ever Test Edition of Ship Wars. Ship Wars was created by two Google Waterloo engineers, Garret Kelly and Aaron Kemp, as a 20% project. Yes, 20% time does exist at Google!  The object of the game is to code a spaceship that will outperform all others in a virtual universe - algorithm vs algorithm.

The Kirkland event marked the 7th iteration of the program, which was also recently run in NYC. Kirkland, however, was the first time the game had been customized to encourage exploratory testing. In the case of "Ship Wars the Test Edition," we planted four bugs and awarded prizes to participants who found them. Well, we ran out of prizes and were quickly reminded that when you put a lot of testing-minded people in a room, many bugs will be unveiled! One of the best bugs unveiled was not one of the four we planted in the simulator: when you turned your ship 90 degrees, it actually turned -90 degrees. Oops!

Participants were encouraged to build and test their spaceships on their own machines or on Google Chromebooks. While the coding was done in the browser, the simulator and web server ran on Google Compute Engine. Throughout the 90 minutes, people challenged other participants to duels, and head-to-head battles took place on Chromebooks at the front of the room. There were many accolades called out, but in the end there could be only one champion, who walked away with a brand spankin’ new Nexus 7. Check out our video of the evening’s activities.

Sounds fun, huh? We sure hope our participants, including our first place winner shown receiving the Nexus 7 from Garret, enjoyed the evening! Beyond the battles, our guests were introduced to the revived Google Testing Blog, heard firsthand that GTAC will be back in 2013, learned about testing at Google, and interacted with Googlers in a "Googley" environment. Achievement unlocked.

Special thanks to all the Googlers that supported the event!

Author: "Anthony Vallone (noreply@blogger.com)" Tags: "Kirkland, Tony Voellm"
Date: Thursday, 09 May 2013 13:03
By Andrew Trenk
This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

There are typically two ways a unit test can verify that the code under test is working properly: by testing state or by testing interactions. What’s the difference between these?

Testing state means you're verifying that the code under test returns the right results.

public void testSortNumbers() {
NumberSorter numberSorter = new NumberSorter(quicksort, bubbleSort);
// Verify that the returned list is sorted. It doesn't matter which sorting
// algorithm is used, as long as the right result is returned.

assertEquals(
Arrays.asList(1, 2, 3),
numberSorter.sortNumbers(Arrays.asList(3, 1, 2)));
}

Testing interactions means you're verifying that the code under test calls certain methods properly.

public void testSortNumbers_quicksortIsUsed() {
// Pass in mocks to the class and call the method under test.
NumberSorter numberSorter = new NumberSorter(mockQuicksort, mockBubbleSort);
numberSorter.sortNumbers(Arrays.asList(3, 1, 2));
// Verify that numberSorter.sortNumbers() used quicksort. The test should
// fail if mockQuicksort.sort() is never called or if it's called with the
// wrong arguments (e.g. if mockBubbleSort is used to sort the numbers).

verify(mockQuicksort).sort(Arrays.asList(3, 1, 2));
}

The second test may result in good code coverage, but it doesn't tell you whether sorting works properly, only that quicksort.sort() was called. Just because a test that uses interactions is passing doesn't mean the code is working properly. This is why, in most cases, you want to test state, not interactions.

In general, interactions should be tested when correctness doesn't just depend on what the code's output is, but also on how that output is determined. In the above example, you would only want to test interactions in addition to testing state if it's important that quicksort is used (e.g. the method would run too slowly with a different sorting algorithm); otherwise, the test using interactions is unnecessary.

What are some other examples of cases where you want to test interactions?
- The code under test calls a method where differences in the number or order of calls would cause undesired behavior, such as side effects (e.g. you only want one email to be sent), latency (e.g. you only want a certain number of disk reads to occur) or multithreading issues (e.g. your code will deadlock if it calls some methods in the wrong order). Testing interactions ensures that your tests will fail if these methods aren't called properly.
- You're testing a UI where the rendering details of the UI are abstracted away from the UI logic (e.g. using MVC or MVP). In tests for your controller/presenter, you only care that a certain method of the view was called, not what was actually rendered, so you can test interactions with the view. Similarly, when testing the view, you can test interactions with the controller/presenter (see the sketch below).
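
A minimal sketch of the presenter case (PostsPresenter, mockPostsView, and fakePostsService are hypothetical names, not from this article):

public void testLoadPosts_asksViewToShowSpinner() {
  PostsPresenter presenter = new PostsPresenter(mockPostsView, fakePostsService);
  presenter.loadPosts();
  // We only care that the presenter asked the view to show progress,
  // not how the view actually renders it.
  verify(mockPostsView).showLoadingSpinner();
}
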
Author: "Google Testing Bloggers (noreply@blogger.com)" Tags: "Andrew Trenk, TotT"
Date: Saturday, 04 May 2013 10:44
by The GTAC Committee

The Google Test Automation Conference (GTAC) was held last week in NYC on April 23rd & 24th. The theme for this year's conference was Mobile and Media. We were fortunate to have a cross section of attendees and presenters from industry and academia. This year’s talks focused on trends we are seeing in industry, combined with compelling talks on tools and infrastructure that can have a direct impact on our products. We believe we achieved a conference for engineers, by engineers. GTAC 2013 demonstrated that there is a strong trend toward the emergence of test engineering as a computer science discipline across companies and academia alike.

All of the slides, video recordings, and photos are now available on the GTAC site. Thank you to all the speakers and attendees who made this event spectacular. We are already looking forward to the next GTAC. If you have suggestions for next year’s location or theme, please comment on this post. To receive GTAC updates, subscribe to the Google Testing Blog.

Here are some responses to GTAC 2013:

“My first GTAC, and one of the best conferences of any kind I've ever been to. The talks were consistently great and the chance to interact with so many experts from all over the map was priceless.” - Gareth Bowles, Netflix

“Adding my own thanks as a speaker (and consumer of the material, I learned a lot from the other speakers) -- this was amazingly well run, and had facilities that I've seen many larger conferences not provide. I got everything I wanted from attending and more!” - James Waldrop, Twitter

“This was a wonderful conference. I learned so much in two days and met some great people. Can't wait to get back to Denver and use all this newly acquired knowledge!” - Crystal Preston-Watson, Ping Identity

“GTAC is hands down the smoothest conference/event I've attended. Well done to Google and all involved.” - Alister Scott, ThoughtWorks

“Thanks and compliments for an amazingly brain activity spurring event. I returned very inspired. First day back at work and the first thing I am doing is looking into improving our build automation and speed (1 min is too long. We are not building that much, groovy is dynamic).” - Irina Muchnik, Zynx Health

Author: "Google Testing Bloggers (noreply@blogger.com)" Tags: "GTAC"