# Advanced googletest Topics

## Introduction

Now that you have read the [googletest Primer](primer.md) and learned how to write tests using googletest, it's time to learn some new tricks. This document will show you more assertions as well as how to construct complex failure messages, propagate fatal failures, reuse and speed up your test fixtures, and use various flags with your tests.

## More Assertions

This section covers some less frequently used, but still significant, assertions.

### Explicit Success and Failure

These three assertions do not actually test a value or expression. Instead, they generate a success or failure directly. Like the macros that actually perform a test, you may stream a custom failure message into them.

```c++
SUCCEED();
```

Generates a success. This does **NOT** make the overall test succeed. A test is considered successful only if none of its assertions fail during its execution.

NOTE: `SUCCEED()` is purely documentary and currently doesn't generate any user-visible output. However, we may add `SUCCEED()` messages to googletest's output in the future.

```c++
FAIL();
ADD_FAILURE();
ADD_FAILURE_AT("file_path", line_number);
```

`FAIL()` generates a fatal failure, while `ADD_FAILURE()` and `ADD_FAILURE_AT()` generate a nonfatal failure. These are useful when control flow, rather than a Boolean expression, determines the test's success or failure. For example, you might want to write something like:

```c++
switch(expression) {
  case 1:
    ... some checks ...
  case 2:
    ... some other checks ...
  default:
    FAIL() << "We shouldn't get here.";
}
```

NOTE: you can only use `FAIL()` in functions that return `void`. See the [Assertion Placement section](#assertion-placement) for more information.

**Availability**: Linux, Windows, Mac.

### Exception Assertions

These are for verifying that a piece of code throws (or does not throw) an exception of the given type:

Fatal assertion                             | Nonfatal assertion                          | Verifies
------------------------------------------- | ------------------------------------------- | --------
`ASSERT_THROW(statement, exception_type);`  | `EXPECT_THROW(statement, exception_type);`  | `statement` throws an exception of the given type
`ASSERT_ANY_THROW(statement);`              | `EXPECT_ANY_THROW(statement);`              | `statement` throws an exception of any type
`ASSERT_NO_THROW(statement);`               | `EXPECT_NO_THROW(statement);`               | `statement` doesn't throw any exception

Examples:

```c++
ASSERT_THROW(Foo(5), bar_exception);

EXPECT_NO_THROW({
  int n = 5;
  Bar(&n);
});
```

**Availability**: Linux, Windows, Mac; requires exceptions to be enabled in the build environment (note that `google3` **disables** exceptions).

### Predicate Assertions for Better Error Messages

Even though googletest has a rich set of assertions, they can never be complete, as it's impossible (nor a good idea) to anticipate all scenarios a user might run into. Therefore, sometimes a user has to use `EXPECT_TRUE()` to check a complex expression, for lack of a better macro. This has the problem of not showing you the values of the parts of the expression, making it hard to understand what went wrong. As a workaround, some users choose to construct the failure message by themselves, streaming it into `EXPECT_TRUE()`.
However, this is awkward especially when the expression has side-effects or is expensive to evaluate.

googletest gives you three different options to solve this problem:

#### Using an Existing Boolean Function

If you already have a function or functor that returns `bool` (or a type that can be implicitly converted to `bool`), you can use it in a *predicate assertion* to get the function arguments printed for free:

| Fatal assertion                    | Nonfatal assertion                 | Verifies                    |
| ---------------------------------- | ---------------------------------- | --------------------------- |
| `ASSERT_PRED1(pred1, val1);`       | `EXPECT_PRED1(pred1, val1);`       | `pred1(val1)` is true       |
| `ASSERT_PRED2(pred2, val1, val2);` | `EXPECT_PRED2(pred2, val1, val2);` | `pred2(val1, val2)` is true |
| `...`                              | `...`                              | ...                         |

In the above, `predn` is an `n`-ary predicate function or functor, where `val1`, `val2`, ..., and `valn` are its arguments. The assertion succeeds if the predicate returns `true` when applied to the given arguments, and fails otherwise. When the assertion fails, it prints the value of each argument. In either case, the arguments are evaluated exactly once.

Here's an example. Given

```c++
// Returns true if m and n have no common divisors except 1.
bool MutuallyPrime(int m, int n) { ... }

const int a = 3;
const int b = 4;
const int c = 10;
```

the assertion

```c++
EXPECT_PRED2(MutuallyPrime, a, b);
```

will succeed, while the assertion

```c++
EXPECT_PRED2(MutuallyPrime, b, c);
```

will fail with the message

```none
MutuallyPrime(b, c) is false, where
b is 4
c is 10
```

> NOTE:
>
> 1.  If you see a compiler error "no matching function to call" when using
>     `ASSERT_PRED*` or `EXPECT_PRED*`, please see
>     [this](faq.md#OverloadedPredicate) for how to resolve it.
> 1.  Currently we only provide predicate assertions of arity <= 5. If you need
>     a higher-arity assertion, let [us](https://github.com/google/googletest/issues) know.

**Availability**: Linux, Windows, Mac.

#### Using a Function That Returns an AssertionResult

While `EXPECT_PRED*()` and friends are handy for a quick job, the syntax is not satisfactory: you have to use different macros for different arities, and it feels more like Lisp than C++. The `::testing::AssertionResult` class solves this problem.

An `AssertionResult` object represents the result of an assertion (whether it's a success or a failure, and an associated message). You can create an `AssertionResult` using one of these factory functions:

```c++
namespace testing {

// Returns an AssertionResult object to indicate that an assertion has
// succeeded.
AssertionResult AssertionSuccess();

// Returns an AssertionResult object to indicate that an assertion has
// failed.
AssertionResult AssertionFailure();

}
```

You can then use the `<<` operator to stream messages to the `AssertionResult` object.

To provide more readable messages in Boolean assertions (e.g. `EXPECT_TRUE()`), write a predicate function that returns `AssertionResult` instead of `bool`.
For example, if you define `IsEven()` as:

```c++
::testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0)
    return ::testing::AssertionSuccess();
  else
    return ::testing::AssertionFailure() << n << " is odd";
}
```

instead of:

```c++
bool IsEven(int n) {
  return (n % 2) == 0;
}
```

the failed assertion `EXPECT_TRUE(IsEven(Fib(4)))` will print:

```none
Value of: IsEven(Fib(4))
  Actual: false (3 is odd)
Expected: true
```

instead of a more opaque

```none
Value of: IsEven(Fib(4))
  Actual: false
Expected: true
```

If you want informative messages in `EXPECT_FALSE` and `ASSERT_FALSE` as well (one third of Boolean assertions in the Google code base are negative ones), and are fine with making the predicate slower in the success case, you can supply a success message:

```c++
::testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0)
    return ::testing::AssertionSuccess() << n << " is even";
  else
    return ::testing::AssertionFailure() << n << " is odd";
}
```

Then the statement `EXPECT_FALSE(IsEven(Fib(6)))` will print

```none
Value of: IsEven(Fib(6))
  Actual: true (8 is even)
Expected: false
```

**Availability**: Linux, Windows, Mac.

#### Using a Predicate-Formatter

If you find the default message generated by `(ASSERT|EXPECT)_PRED*` and `(ASSERT|EXPECT)_(TRUE|FALSE)` unsatisfactory, or some arguments to your predicate do not support streaming to `ostream`, you can instead use the following *predicate-formatter assertions* to *fully* customize how the message is formatted:

Fatal assertion                                    | Nonfatal assertion                                 | Verifies
-------------------------------------------------- | -------------------------------------------------- | --------
`ASSERT_PRED_FORMAT1(pred_format1, val1);`         | `EXPECT_PRED_FORMAT1(pred_format1, val1);`         | `pred_format1(val1)` is successful
`ASSERT_PRED_FORMAT2(pred_format2, val1, val2);`   | `EXPECT_PRED_FORMAT2(pred_format2, val1, val2);`   | `pred_format2(val1, val2)` is successful
`...`                                              | `...`                                              | ...

The difference between this and the previous group of macros is that instead of a predicate, `(ASSERT|EXPECT)_PRED_FORMAT*` take a *predicate-formatter* (`pred_formatn`), which is a function or functor with the signature:

```c++
::testing::AssertionResult PredicateFormattern(const char* expr1,
                                               const char* expr2,
                                               ...
                                               const char* exprn,
                                               T1 val1,
                                               T2 val2,
                                               ...
                                               Tn valn);
```

where `val1`, `val2`, ..., and `valn` are the values of the predicate arguments, and `expr1`, `expr2`, ..., and `exprn` are the corresponding expressions as they appear in the source code. The types `T1`, `T2`, ..., and `Tn` can be either value types or reference types. For example, if an argument has type `Foo`, you can declare it as either `Foo` or `const Foo&`, whichever is appropriate.

As an example, let's improve the failure message in `MutuallyPrime()`, which was used with `EXPECT_PRED2()`:

```c++
// Returns the smallest prime common divisor of m and n,
// or 1 when m and n are mutually prime.
int SmallestPrimeCommonDivisor(int m, int n) { ... }
// A predicate-formatter for asserting that two integers are mutually prime.
::testing::AssertionResult AssertMutuallyPrime(const char* m_expr,
                                               const char* n_expr,
                                               int m,
                                               int n) {
  if (MutuallyPrime(m, n)) return ::testing::AssertionSuccess();

  return ::testing::AssertionFailure() << m_expr << " and " << n_expr
      << " (" << m << " and " << n << ") are not mutually prime, "
      << "as they have a common divisor " << SmallestPrimeCommonDivisor(m, n);
}
```

With this predicate-formatter, we can use

```c++
EXPECT_PRED_FORMAT2(AssertMutuallyPrime, b, c);
```

to generate the message

```none
b and c (4 and 10) are not mutually prime, as they have a common divisor 2.
```

As you may have realized, many of the built-in assertions we introduced earlier are special cases of `(EXPECT|ASSERT)_PRED_FORMAT*`. In fact, most of them are indeed defined using `(EXPECT|ASSERT)_PRED_FORMAT*`.

**Availability**: Linux, Windows, Mac.

### Floating-Point Comparison

Comparing floating-point numbers is tricky. Due to round-off errors, it is very unlikely that two floating-points will match exactly. Therefore, `ASSERT_EQ`'s naive comparison usually doesn't work. And since floating-points can have a wide value range, no single fixed error bound works. It's better to compare by a fixed relative error bound, except for values close to 0 due to the loss of precision there.

In general, for floating-point comparison to make sense, the user needs to carefully choose the error bound. If they don't want or care to, comparing in terms of Units in the Last Place (ULPs) is a good default, and googletest provides assertions to do this. Full details about ULPs are quite long; if you want to learn more, see [here](https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/).

#### Floating-Point Macros

| Fatal assertion                 | Nonfatal assertion              | Verifies                                 |
| ------------------------------- | ------------------------------- | ---------------------------------------- |
| `ASSERT_FLOAT_EQ(val1, val2);`  | `EXPECT_FLOAT_EQ(val1, val2);`  | the two `float` values are almost equal  |
| `ASSERT_DOUBLE_EQ(val1, val2);` | `EXPECT_DOUBLE_EQ(val1, val2);` | the two `double` values are almost equal |

By "almost equal" we mean the values are within 4 ULP's from each other.

NOTE: `CHECK_DOUBLE_EQ()` in `base/logging.h` uses a fixed absolute error bound, so its result may differ from that of the googletest macros. That macro is unsafe and has been deprecated. Please don't use it any more.

The following assertions allow you to choose the acceptable error bound:

| Fatal assertion                       | Nonfatal assertion                    | Verifies                                                                          |
| ------------------------------------- | ------------------------------------- | --------------------------------------------------------------------------------- |
| `ASSERT_NEAR(val1, val2, abs_error);` | `EXPECT_NEAR(val1, val2, abs_error);` | the difference between `val1` and `val2` doesn't exceed the given absolute error |

**Availability**: Linux, Windows, Mac.

#### Floating-Point Predicate-Format Functions

Some floating-point operations are useful, but not that often used. In order to avoid an explosion of new macros, we provide them as predicate-format functions that can be used in predicate assertion macros (e.g. `EXPECT_PRED_FORMAT2`, etc).

```c++
EXPECT_PRED_FORMAT2(::testing::FloatLE, val1, val2);
EXPECT_PRED_FORMAT2(::testing::DoubleLE, val1, val2);
```

Verifies that `val1` is less than, or almost equal to, `val2`. You can replace `EXPECT_PRED_FORMAT2` in the above table with `ASSERT_PRED_FORMAT2`.
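As a quick illustration of the macros above (a small sketch; the literal values here are chosen only for this example):

```c++
// ASSERT_EQ(0.3, 0.1 + 0.2) would fail because of round-off error, but the
// computed sum is within 4 ULPs of 0.3, so the ULP-based comparison accepts it.
EXPECT_DOUBLE_EQ(0.3, 0.1 + 0.2);

// When you want to choose the error bound yourself, use EXPECT_NEAR.
EXPECT_NEAR(3.1416, 3.141592653589793, 0.001);

// DoubleLE verifies "less than, or almost equal to".
EXPECT_PRED_FORMAT2(::testing::DoubleLE, 0.1 + 0.2, 0.3);
```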
**Availability**: Linux, Windows, Mac.

### Asserting Using gMock Matchers

Google-developed C++ mocking framework [gMock](../../googlemock) comes with a library of matchers for validating arguments passed to mock objects. A gMock *matcher* is basically a predicate that knows how to describe itself. It can be used in these assertion macros:

| Fatal assertion                | Nonfatal assertion             | Verifies              |
| ------------------------------ | ------------------------------ | --------------------- |
| `ASSERT_THAT(value, matcher);` | `EXPECT_THAT(value, matcher);` | value matches matcher |

For example, `StartsWith(prefix)` is a matcher that matches a string starting with `prefix`, and you can write:

```c++
using ::testing::StartsWith;
...
// Verifies that Foo() returns a string starting with "Hello".
EXPECT_THAT(Foo(), StartsWith("Hello"));
```

Read this [recipe](../../googlemock/docs/CookBook.md#using-matchers-in-google-test-assertions) in the gMock Cookbook for more details.

gMock has a rich set of matchers. You can do many things googletest cannot do alone with them. For a list of matchers gMock provides, read [this](../../googlemock/docs/CookBook.md#using-matchers). Especially useful among them are some [protocol buffer matchers](https://github.com/google/nucleus/blob/master/nucleus/testing/protocol-buffer-matchers.h). It's easy to write your [own matchers](../../googlemock/docs/CookBook.md#writing-new-matchers-quickly) too.

For example, you can use gMock's [EqualsProto](https://github.com/google/nucleus/blob/master/nucleus/testing/protocol-buffer-matchers.h) to compare protos in your tests:

```c++
#include "testing/base/public/gmock.h"
using ::testing::EqualsProto;
...
EXPECT_THAT(actual_proto, EqualsProto("foo: 123 bar: 'xyz'"));
EXPECT_THAT(*actual_proto_ptr, EqualsProto(expected_proto));
```

gMock is bundled with googletest, so you don't need to add any build dependency in order to take advantage of this. Just include `"testing/base/public/gmock.h"` and you're ready to go.

**Availability**: Linux, Windows, and Mac.

### More String Assertions

(Please read the [previous](#asserting-using-gmock-matchers) section first if you haven't.)

You can use the gMock [string matchers](../../googlemock/docs/CheatSheet.md#string-matchers) with `EXPECT_THAT()` or `ASSERT_THAT()` to do more string comparison tricks (sub-string, prefix, suffix, regular expression, etc.).
For example,

```c++
using ::testing::HasSubstr;
using ::testing::MatchesRegex;
...
ASSERT_THAT(foo_string, HasSubstr("needle"));
EXPECT_THAT(bar_string, MatchesRegex("\\w*\\d+"));
```

**Availability**: Linux, Windows, Mac.

If the string contains a well-formed HTML or XML document, you can check whether its DOM tree matches an [XPath expression](http://www.w3.org/TR/xpath/#contents):

```c++
// Currently still in //template/prototemplate/testing:xpath_matcher
#include "template/prototemplate/testing/xpath_matcher.h"
using prototemplate::testing::MatchesXPath;
EXPECT_THAT(html_string, MatchesXPath("//a[text()='click here']"));
```

**Availability**: Linux.

### Windows HRESULT assertions

These assertions test for `HRESULT` success or failure.

Fatal assertion                          | Nonfatal assertion                       | Verifies
------------------------------------------ | ------------------------------------------ | --------
`ASSERT_HRESULT_SUCCEEDED(expression)`     | `EXPECT_HRESULT_SUCCEEDED(expression)`     | `expression` is a success `HRESULT`
`ASSERT_HRESULT_FAILED(expression)`        | `EXPECT_HRESULT_FAILED(expression)`        | `expression` is a failure `HRESULT`

The generated output contains the human-readable error message associated with the `HRESULT` code returned by `expression`.

You might use them like this:

```c++
CComPtr<IShellDispatch2> shell;
ASSERT_HRESULT_SUCCEEDED(shell.CoCreateInstance(L"Shell.Application"));
CComVariant empty;
ASSERT_HRESULT_SUCCEEDED(shell->ShellExecute(CComBSTR(url), empty, empty, empty, empty));
```

**Availability**: Windows.

### Type Assertions

You can call the function

```c++
::testing::StaticAssertTypeEq<T1, T2>();
```

to assert that types `T1` and `T2` are the same. The function does nothing if the assertion is satisfied. If the types are different, the function call will fail to compile, and the compiler error message will likely (depending on the compiler) show you the actual values of `T1` and `T2`. This is mainly useful inside template code.

**Caveat**: When used inside a member function of a class template or a function template, `StaticAssertTypeEq<T1, T2>()` is effective only if the function is instantiated. For example, given:

```c++
template <typename T> class Foo {
 public:
  void Bar() { ::testing::StaticAssertTypeEq<int, T>(); }
};
```

the code:

```c++
void Test1() { Foo<bool> foo; }
```

will not generate a compiler error, as `Foo<bool>::Bar()` is never actually instantiated. Instead, you need:

```c++
void Test2() { Foo<bool> foo; foo.Bar(); }
```

to cause a compiler error.

**Availability**: Linux, Windows, Mac.

### Assertion Placement

You can use assertions in any C++ function. In particular, it doesn't have to be a method of the test fixture class. The one constraint is that assertions that generate a fatal failure (`FAIL*` and `ASSERT_*`) can only be used in void-returning functions. This is a consequence of Google's not using exceptions. By placing it in a non-void function you'll get a confusing compile error like `"error: void value not ignored as it ought to be"` or `"cannot initialize return object of type 'bool' with an rvalue of type 'void'"` or `"error: no viable conversion from 'void' to 'string'"`.

If you need to use fatal assertions in a function that returns non-void, one option is to make the function return the value in an out parameter instead. For example, you can rewrite `T2 Foo(T1 x)` to `void Foo(T1 x, T2* result)`. You need to make sure that `*result` contains some sensible value even when the function returns prematurely. As the function now returns `void`, you can use any assertion inside of it.
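Here is a minimal sketch of that out-parameter pattern; the `ParseNumber` function is hypothetical:

```c++
#include <string>
#include "gtest/gtest.h"

// Hypothetical: instead of `int ParseNumber(const std::string& s)`, the result
// is returned through an out parameter so fatal assertions can be used inside.
void ParseNumber(const std::string& s, int* result) {
  *result = 0;  // Keep *result sensible even if we return prematurely.
  ASSERT_FALSE(s.empty()) << "cannot parse an empty string";
  *result = std::stoi(s);
}

TEST(ParseNumberTest, ParsesDigits) {
  int value = 0;
  ParseNumber("42", &value);
  EXPECT_EQ(42, value);
}
```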
If changing the function's type is not an option, you should just use assertions that generate non-fatal failures, such as `ADD_FAILURE*` and `EXPECT_*`.

NOTE: Constructors and destructors are not considered void-returning functions, according to the C++ language specification, and so you may not use fatal assertions in them. You'll get a compilation error if you try. A simple workaround is to transfer the entire body of the constructor or destructor to a private void-returning method. However, you should be aware that a fatal assertion failure in a constructor does not terminate the current test, as your intuition might suggest; it merely returns from the constructor early, possibly leaving your object in a partially-constructed state. Likewise, a fatal assertion failure in a destructor may leave your object in a partially-destructed state. Use assertions carefully in these situations!

## Teaching googletest How to Print Your Values

When a test assertion such as `EXPECT_EQ` fails, googletest prints the argument values to help you debug. It does this using a user-extensible value printer.

This printer knows how to print built-in C++ types, native arrays, STL containers, and any type that supports the `<<` operator. For other types, it prints the raw bytes in the value and hopes that you the user can figure it out.

As mentioned earlier, the printer is *extensible*. That means you can teach it to do a better job at printing your particular type than to dump the bytes. To do that, define `<<` for your type:

```c++
// Streams are allowed only for logging. Don't include this for
// any other purpose.
#include <ostream>

namespace foo {

class Bar {  // We want googletest to be able to print instances of this.
  ...
  // Create a free inline friend function.
  friend std::ostream& operator<<(std::ostream& os, const Bar& bar) {
    return os << bar.DebugString();  // whatever needed to print bar to os
  }
};

// If you can't declare the function in the class it's important that the
// << operator is defined in the SAME namespace that defines Bar. C++'s look-up
// rules rely on that.
std::ostream& operator<<(std::ostream& os, const Bar& bar) {
  return os << bar.DebugString();  // whatever needed to print bar to os
}

}  // namespace foo
```

Sometimes, this might not be an option: your team may consider it bad style to have a `<<` operator for `Bar`, or `Bar` may already have a `<<` operator that doesn't do what you want (and you cannot change it). If so, you can instead define a `PrintTo()` function like this:

```c++
// Streams are allowed only for logging. Don't include this for
// any other purpose.
#include <ostream>

namespace foo {

class Bar {
  ...
  friend void PrintTo(const Bar& bar, std::ostream* os) {
    *os << bar.DebugString();  // whatever needed to print bar to os
  }
};

// If you can't declare the function in the class it's important that PrintTo()
// is defined in the SAME namespace that defines Bar. C++'s look-up rules rely
// on that.
void PrintTo(const Bar& bar, std::ostream* os) {
  *os << bar.DebugString();  // whatever needed to print bar to os
}

}  // namespace foo
```

If you have defined both `<<` and `PrintTo()`, the latter will be used as far as googletest is concerned.
This allows you to customize how the value appears in googletest's output without affecting code that relies on the behavior of its `<<` operator.

If you want to print a value `x` using googletest's value printer yourself, just call `::testing::PrintToString(x)`, which returns an `std::string`:

```c++
vector<pair<Bar, int> > bar_ints = GetBarIntVector();

EXPECT_TRUE(IsCorrectBarIntVector(bar_ints))
    << "bar_ints = " << ::testing::PrintToString(bar_ints);
```

## Death Tests

In many applications, there are assertions that can cause application failure if a condition is not met. These sanity checks, which ensure that the program is in a known good state, are there to fail at the earliest possible time after some program state is corrupted. If the assertion checks the wrong condition, then the program may proceed in an erroneous state, which could lead to memory corruption, security holes, or worse. Hence it is vitally important to test that such assertion statements work as expected.

Since these precondition checks cause the processes to die, we call such tests _death tests_. More generally, any test that checks that a program terminates (except by throwing an exception) in an expected fashion is also a death test.

Note that if a piece of code throws an exception, we don't consider it "death" for the purpose of death tests, as the caller of the code could catch the exception and avoid the crash. If you want to verify exceptions thrown by your code, see [Exception Assertions](#exception-assertions).

If you want to test `EXPECT_*()/ASSERT_*()` failures in your test code, see ["Catching" Failures](#catching-failures) below.

### How to Write a Death Test

googletest has the following macros to support death tests:

Fatal assertion                                  | Nonfatal assertion                               | Verifies
-------------------------------------------------- | -------------------------------------------------- | --------
`ASSERT_DEATH(statement, regex);`                  | `EXPECT_DEATH(statement, regex);`                  | `statement` crashes with the given error
`ASSERT_DEATH_IF_SUPPORTED(statement, regex);`     | `EXPECT_DEATH_IF_SUPPORTED(statement, regex);`     | if death tests are supported, verifies that `statement` crashes with the given error; otherwise verifies nothing
`ASSERT_EXIT(statement, predicate, regex);`        | `EXPECT_EXIT(statement, predicate, regex);`        | `statement` exits with the given error, and its exit code matches `predicate`

where `statement` is a statement that is expected to cause the process to die, `predicate` is a function or function object that evaluates an integer exit status, and `regex` is a (Perl) regular expression that the stderr output of `statement` is expected to match. Note that `statement` can be *any valid statement* (including *compound statement*) and doesn't have to be an expression.

As usual, the `ASSERT` variants abort the current test function, while the `EXPECT` variants do not.

> NOTE: We use the word "crash" here to mean that the process terminates with a
> *non-zero* exit status code. There are two possibilities: either the process
> has called `exit()` or `_exit()` with a non-zero value, or it may be killed by
> a signal.
>
> This means that if `*statement*` terminates the process with a 0 exit code, it
> is *not* considered a crash by `EXPECT_DEATH`. Use `EXPECT_EXIT` instead if
> this is the case, or if you want to restrict the exit code more precisely.

A predicate here must accept an `int` and return a `bool`. The death test succeeds only if the predicate returns `true`.
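For example, on POSIX systems a custom predicate can inspect the raw exit status directly. This is only a sketch; `DoFatalThing()` is a hypothetical function that is expected to terminate the process with a non-zero exit code:

```c++
#include <sys/wait.h>  // for WIFEXITED / WEXITSTATUS (POSIX only)

// Returns true if the child exited normally with any non-zero exit code.
bool ExitedWithNonZeroCode(int exit_status) {
  return WIFEXITED(exit_status) && WEXITSTATUS(exit_status) != 0;
}

TEST(MyDeathTest, UsesACustomPredicate) {
  EXPECT_EXIT(DoFatalThing(), ExitedWithNonZeroCode, "fatal error");
}
```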
googletest defines a few predicates that handle the most common cases:

```c++
::testing::ExitedWithCode(exit_code)
```

This expression is `true` if the program exited normally with the given exit code.

```c++
::testing::KilledBySignal(signal_number)  // Not available on Windows.
```

This expression is `true` if the program was killed by the given signal.

The `*_DEATH` macros are convenient wrappers for `*_EXIT` that use a predicate that verifies the process' exit code is non-zero.

Note that a death test only cares about three things:

1.  does `statement` abort or exit the process?
2.  (in the case of `ASSERT_EXIT` and `EXPECT_EXIT`) does the exit status satisfy `predicate`? Or (in the case of `ASSERT_DEATH` and `EXPECT_DEATH`) is the exit status non-zero? And
3.  does the stderr output match `regex`?

In particular, if `statement` generates an `ASSERT_*` or `EXPECT_*` failure, it will **not** cause the death test to fail, as googletest assertions don't abort the process.

To write a death test, simply use one of the above macros inside your test function. For example,

```c++
TEST(MyDeathTest, Foo) {
  // This death test uses a compound statement.
  ASSERT_DEATH({
    int n = 5;
    Foo(&n);
  }, "Error on line .* of Foo()");
}

TEST(MyDeathTest, NormalExit) {
  EXPECT_EXIT(NormalExit(), ::testing::ExitedWithCode(0), "Success");
}

TEST(MyDeathTest, KillMyself) {
  EXPECT_EXIT(KillMyself(), ::testing::KilledBySignal(SIGKILL),
              "Sending myself unblockable signal");
}
```

verifies that:

* calling `Foo(5)` causes the process to die with the given error message,
* calling `NormalExit()` causes the process to print `"Success"` to stderr and exit with exit code 0, and
* calling `KillMyself()` kills the process with signal `SIGKILL`.

The test function body may contain other assertions and statements as well, if necessary.

### Death Test Naming

IMPORTANT: We strongly recommend you to follow the convention of naming your **test case** (not test) `*DeathTest` when it contains a death test, as demonstrated in the above example. The [Death Tests And Threads](#death-tests-and-threads) section below explains why.

If a test fixture class is shared by normal tests and death tests, you can use `using` or `typedef` to introduce an alias for the fixture class and avoid duplicating its code:

```c++
class FooTest : public ::testing::Test { ... };

using FooDeathTest = FooTest;

TEST_F(FooTest, DoesThis) {
  // normal test
}

TEST_F(FooDeathTest, DoesThat) {
  // death test
}
```

**Availability**: Linux, Windows (requires MSVC 8.0 or above), Cygwin, and Mac

### Regular Expression Syntax

On POSIX systems (e.g. Linux, Cygwin, and Mac), googletest uses the [POSIX extended regular expression](http://www.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap09.html#tag_09_04) syntax. To learn about this syntax, you may want to read this [Wikipedia entry](http://en.wikipedia.org/wiki/Regular_expression#POSIX_Extended_Regular_Expressions).

On Windows, googletest uses its own simple regular expression implementation. It lacks many features. For example, we don't support union (`"x|y"`), grouping (`"(xy)"`), brackets (`"[xy]"`), and repetition count (`"x{5,7}"`), among others.
Below is what we do support (`A` denotes a literal character, period (`.`), or a single `\\ ` escape sequence; `x` and `y` denote regular expressions.):

Expression | Meaning
---------- | --------------------------------------------------------------
`c`        | matches any literal character `c`
`\\d`      | matches any decimal digit
`\\D`      | matches any character that's not a decimal digit
`\\f`      | matches `\f`
`\\n`      | matches `\n`
`\\r`      | matches `\r`
`\\s`      | matches any ASCII whitespace, including `\n`
`\\S`      | matches any character that's not a whitespace
`\\t`      | matches `\t`
`\\v`      | matches `\v`
`\\w`      | matches any letter, `_`, or decimal digit
`\\W`      | matches any character that `\\w` doesn't match
`\\c`      | matches any literal character `c`, which must be a punctuation
`.`        | matches any single character except `\n`
`A?`       | matches 0 or 1 occurrences of `A`
`A*`       | matches 0 or many occurrences of `A`
`A+`       | matches 1 or many occurrences of `A`
`^`        | matches the beginning of a string (not that of each line)
`$`        | matches the end of a string (not that of each line)
`xy`       | matches `x` followed by `y`

To help you determine which capability is available on your system, googletest defines macros to govern which regular expression it is using. The macros are: <!--absl:google3-begin(google3-only)-->`GTEST_USES_PCRE=1`, or<!--absl:google3-end--> `GTEST_USES_SIMPLE_RE=1` or `GTEST_USES_POSIX_RE=1`. If you want your death tests to work in all cases, you can either `#if` on these macros or use the more limited syntax only.

### How It Works

Under the hood, `ASSERT_EXIT()` spawns a new process and executes the death test statement in that process. The details of how precisely that happens depend on the platform and the variable `::testing::GTEST_FLAG(death_test_style)` (which is initialized from the command-line flag `--gtest_death_test_style`).

* On POSIX systems, `fork()` (or `clone()` on Linux) is used to spawn the child, after which:
    * If the variable's value is `"fast"`, the death test statement is immediately executed.
    * If the variable's value is `"threadsafe"`, the child process re-executes the unit test binary just as it was originally invoked, but with some extra flags to cause just the single death test under consideration to be run.
* On Windows, the child is spawned using the `CreateProcess()` API, and re-executes the binary to cause just the single death test under consideration to be run - much like the `threadsafe` mode on POSIX.

Other values for the variable are illegal and will cause the death test to fail. Currently, the flag's default value is "fast". However, we reserve the right to change it in the future. Therefore, your tests should not depend on this. In either case, the parent process waits for the child process to complete, and checks that

1. the child's exit status satisfies the predicate, and
2. the child's stderr matches the regular expression.

If the death test statement runs to completion without dying, the child process will nonetheless terminate, and the assertion fails.

### Death Tests And Threads

The reason for the two death test styles has to do with thread safety. Due to well-known problems with forking in the presence of threads, death tests should be run in a single-threaded context. Sometimes, however, it isn't feasible to arrange that kind of environment. For example, statically-initialized modules may start threads before main is ever reached. Once threads have been created, it may be difficult or impossible to clean them up.

googletest has three features intended to raise awareness of threading issues.
1.  A warning is emitted if multiple threads are running when a death test is encountered.
2.  Test cases with a name ending in "DeathTest" are run before all other tests.
3.  It uses `clone()` instead of `fork()` to spawn the child process on Linux (`clone()` is not available on Cygwin and Mac), as `fork()` is more likely to cause the child to hang when the parent process has multiple threads.

It's perfectly fine to create threads inside a death test statement; they are executed in a separate process and cannot affect the parent.

### Death Test Styles

The "threadsafe" death test style was introduced in order to help mitigate the risks of testing in a possibly multithreaded environment. It trades increased test execution time (potentially dramatically so) for improved thread safety.

The automated testing framework does not set the style flag. You can choose a particular style of death tests by setting the flag programmatically:

```c++
testing::FLAGS_gtest_death_test_style="threadsafe"
```

You can do this in `main()` to set the style for all death tests in the binary, or in individual tests. Recall that flags are saved before running each test and restored afterwards, so you need not do that yourself. For example:

```c++
int main(int argc, char** argv) {
  InitGoogle(argv[0], &argc, &argv, true);
  ::testing::FLAGS_gtest_death_test_style = "fast";
  return RUN_ALL_TESTS();
}

TEST(MyDeathTest, TestOne) {
  ::testing::FLAGS_gtest_death_test_style = "threadsafe";
  // This test is run in the "threadsafe" style:
  ASSERT_DEATH(ThisShouldDie(), "");
}

TEST(MyDeathTest, TestTwo) {
  // This test is run in the "fast" style:
  ASSERT_DEATH(ThisShouldDie(), "");
}
```

### Caveats

The `statement` argument of `ASSERT_EXIT()` can be any valid C++ statement. If it leaves the current function via a `return` statement or by throwing an exception, the death test is considered to have failed. Some googletest macros may return from the current function (e.g. `ASSERT_TRUE()`), so be sure to avoid them in `statement`.

Since `statement` runs in the child process, any in-memory side effect (e.g. modifying a variable, releasing memory, etc) it causes will *not* be observable in the parent process. In particular, if you release memory in a death test, your program will fail the heap check as the parent process will never see the memory reclaimed. To solve this problem, you can

1. try not to free memory in a death test;
2. free the memory again in the parent process; or
3. do not use the heap checker in your program.

Due to an implementation detail, you cannot place multiple death test assertions on the same line; otherwise, compilation will fail with an unobvious error message.

Despite the improved thread safety afforded by the "threadsafe" style of death test, thread problems such as deadlock are still possible in the presence of handlers registered with `pthread_atfork(3)`.

## Using Assertions in Sub-routines

### Adding Traces to Assertions

If a test sub-routine is called from several places, when an assertion inside it fails, it can be hard to tell which invocation of the sub-routine the failure is from.

You can alleviate this problem using extra logging or custom failure messages, but that usually clutters up your tests. A better solution is to use the `SCOPED_TRACE` macro or the `ScopedTrace` utility:

```c++
SCOPED_TRACE(message);
ScopedTrace trace("file_path", line_number, message);
```

where `message` can be anything streamable to `std::ostream`. `SCOPED_TRACE` macro will cause the current file name, line number, and the given message to be added in every failure message.
`ScopedTrace` accepts explicit file name and line number in arguments, which is useful for writing test helpers. The effect will be undone when the control leaves the current lexical scope.

For example,

```c++
10: void Sub1(int n) {
11:   EXPECT_EQ(1, Bar(n));
12:   EXPECT_EQ(2, Bar(n + 1));
13: }
14:
15: TEST(FooTest, Bar) {
16:   {
17:     SCOPED_TRACE("A");  // This trace point will be included in
18:                         // every failure in this scope.
19:     Sub1(1);
20:   }
21:   // Now it won't.
22:   Sub1(9);
23: }
```

could result in messages like these:

```none
path/to/foo_test.cc:11: Failure
Value of: Bar(n)
Expected: 1
  Actual: 2
   Trace:
path/to/foo_test.cc:17: A

path/to/foo_test.cc:12: Failure
Value of: Bar(n + 1)
Expected: 2
  Actual: 3
```

Without the trace, it would've been difficult to know which invocation of `Sub1()` the two failures come from respectively. (You could add an extra message to each assertion in `Sub1()` to indicate the value of `n`, but that's tedious.)

Some tips on using `SCOPED_TRACE`:

1.  With a suitable message, it's often enough to use `SCOPED_TRACE` at the beginning of a sub-routine, instead of at each call site.
2.  When calling sub-routines inside a loop, make the loop iterator part of the message in `SCOPED_TRACE` such that you can know which iteration the failure is from.
3.  Sometimes the line number of the trace point is enough for identifying the particular invocation of a sub-routine. In this case, you don't have to choose a unique message for `SCOPED_TRACE`. You can simply use `""`.
4.  You can use `SCOPED_TRACE` in an inner scope when there is one in the outer scope. In this case, all active trace points will be included in the failure messages, in reverse order they are encountered.
5.  The trace dump is clickable in Emacs - hit `return` on a line number and you'll be taken to that line in the source file!

**Availability**: Linux, Windows, Mac.

### Propagating Fatal Failures

A common pitfall when using `ASSERT_*` and `FAIL*` is not understanding that when they fail they only abort the _current function_, not the entire test. For example, the following test will segfault:

```c++
void Subroutine() {
  // Generates a fatal failure and aborts the current function.
  ASSERT_EQ(1, 2);

  // The following won't be executed.
  ...
}

TEST(FooTest, Bar) {
  Subroutine();  // The intended behavior is for the fatal failure
                 // in Subroutine() to abort the entire test.
                 // The actual behavior: the function goes on after Subroutine() returns.
  int* p = NULL;
  *p = 3;  // Segfault!
}
```

To alleviate this, googletest provides three different solutions. You could use either exceptions, the `(ASSERT|EXPECT)_NO_FATAL_FAILURE` assertions or the `HasFatalFailure()` function. They are described in the following subsections.

#### Asserting on Subroutines with an exception

The following code can turn ASSERT-failure into an exception:

```c++
class ThrowListener : public testing::EmptyTestEventListener {
  void OnTestPartResult(const testing::TestPartResult& result) override {
    if (result.type() == testing::TestPartResult::kFatalFailure) {
      throw testing::AssertionException(result);
    }
  }
};

int main(int argc, char** argv) {
  ...
  testing::UnitTest::GetInstance()->listeners().Append(new ThrowListener);
  return RUN_ALL_TESTS();
}
```

This listener should be added after other listeners if you have any, otherwise they won't see failed `OnTestPartResult`.

#### Asserting on Subroutines

As shown above, if your test calls a subroutine that has an `ASSERT_*` failure in it, the test will continue after the subroutine returns. This may not be what you want.

Often people want fatal failures to propagate like exceptions.
For that googletest offers the following macros:

Fatal assertion                         | Nonfatal assertion                      | Verifies
----------------------------------------- | ----------------------------------------- | --------
`ASSERT_NO_FATAL_FAILURE(statement);`     | `EXPECT_NO_FATAL_FAILURE(statement);`     | `statement` doesn't generate any new fatal failures in the current thread.

Only failures in the thread that executes the assertion are checked to determine the result of this type of assertions. If `statement` creates new threads, failures in these threads are ignored.

Examples:

```c++
ASSERT_NO_FATAL_FAILURE(Foo());

int i;
EXPECT_NO_FATAL_FAILURE({
  i = Bar();
});
```

**Availability**: Linux, Windows, Mac. Assertions from multiple threads are currently not supported on Windows.

#### Checking for Failures in the Current Test

`HasFatalFailure()` in the `::testing::Test` class returns `true` if an assertion in the current test has suffered a fatal failure. This allows functions to catch fatal failures in a sub-routine and return early.

```c++
class Test {
 public:
  ...
  static bool HasFatalFailure();
};
```

The typical usage, which basically simulates the behavior of a thrown exception, is:

```c++
TEST(FooTest, Bar) {
  Subroutine();
  // Aborts if Subroutine() had a fatal failure.
  if (HasFatalFailure()) return;

  // The following won't be executed.
  ...
}
```

If `HasFatalFailure()` is used outside of `TEST()`, `TEST_F()`, or a test fixture, you must add the `::testing::Test::` prefix, as in:

```c++
if (::testing::Test::HasFatalFailure()) return;
```

Similarly, `HasNonfatalFailure()` returns `true` if the current test has at least one non-fatal failure, and `HasFailure()` returns `true` if the current test has at least one failure of either kind.

**Availability**: Linux, Windows, Mac.

## Logging Additional Information

In your test code, you can call `RecordProperty("key", value)` to log additional information, where `value` can be either a string or an `int`. The *last* value recorded for a key will be emitted to the [XML output](#generating-an-xml-report) if you specify one. For example, the test

```c++
TEST_F(WidgetUsageTest, MinAndMaxWidgets) {
  RecordProperty("MaximumWidgets", ComputeMaxUsage());
  RecordProperty("MinimumWidgets", ComputeMinUsage());
}
```

will output XML like this:

```xml
...
  <testcase name="MinAndMaxWidgets" status="run" time="0.006" classname="WidgetUsageTest" MaximumWidgets="12" MinimumWidgets="9" />
...
```

> NOTE:
>
> *   `RecordProperty()` is a static member of the `Test` class. Therefore it
>     needs to be prefixed with `::testing::Test::` if used outside of the
>     `TEST` body and the test fixture class.
> *   `key` must be a valid XML attribute name, and cannot conflict with the
>     ones already used by googletest (`name`, `status`, `time`, `classname`,
>     `type_param`, and `value_param`).
> *   Calling `RecordProperty()` outside of the lifespan of a test is allowed.
>     If it's called outside of a test but between a test case's
>     `SetUpTestCase()` and `TearDownTestCase()` methods, it will be attributed
>     to the XML element for the test case. If it's called outside of all test
>     cases (e.g. in a test environment), it will be attributed to the top-level
>     XML element.

**Availability**: Linux, Windows, Mac.

## Sharing Resources Between Tests in the Same Test Case

googletest creates a new test fixture object for each test in order to make tests independent and easier to debug. However, sometimes tests use resources that are expensive to set up, making the one-copy-per-test model prohibitively expensive.

If the tests don't change the resource, there's no harm in their sharing a single resource copy.
So, in addition to per-test set-up/tear-down, googletest also supports per-test-case set-up/tear-down. To use it:

1.  In your test fixture class (say `FooTest`), declare as `static` some member variables to hold the shared resources.
1.  Outside your test fixture class (typically just below it), define those member variables, optionally giving them initial values.
1.  In the same test fixture class, define a `static void SetUpTestCase()` function (remember not to spell it as **`SetupTestCase`** with a small `u`!) to set up the shared resources and a `static void TearDownTestCase()` function to tear them down.

That's it! googletest automatically calls `SetUpTestCase()` before running the *first test* in the `FooTest` test case (i.e. before creating the first `FooTest` object), and calls `TearDownTestCase()` after running the *last test* in it (i.e. after deleting the last `FooTest` object). In between, the tests can use the shared resources.

Remember that the test order is undefined, so your code can't depend on a test preceding or following another. Also, the tests must either not modify the state of any shared resource, or, if they do modify the state, they must restore the state to its original value before passing control to the next test.

Here's an example of per-test-case set-up and tear-down:

```c++
class FooTest : public ::testing::Test {
 protected:
  // Per-test-case set-up.
  // Called before the first test in this test case.
  // Can be omitted if not needed.
  static void SetUpTestCase() {
    shared_resource_ = new ...;
  }

  // Per-test-case tear-down.
  // Called after the last test in this test case.
  // Can be omitted if not needed.
  static void TearDownTestCase() {
    delete shared_resource_;
    shared_resource_ = NULL;
  }

  // You can define per-test set-up logic as usual.
  virtual void SetUp() { ... }

  // You can define per-test tear-down logic as usual.
  virtual void TearDown() { ... }

  // Some expensive resource shared by all tests.
  static T* shared_resource_;
};

T* FooTest::shared_resource_ = NULL;

TEST_F(FooTest, Test1) {
  ... you can refer to shared_resource_ here ...
}

TEST_F(FooTest, Test2) {
  ... you can refer to shared_resource_ here ...
}
```

NOTE: Though the above code declares `SetUpTestCase()` protected, it may sometimes be necessary to declare it public, such as when using it with `TEST_P`.

**Availability**: Linux, Windows, Mac.

## Global Set-Up and Tear-Down

Just as you can do set-up and tear-down at the test level and the test case level, you can also do it at the test program level. Here's how.

First, you subclass the `::testing::Environment` class to define a test environment, which knows how to set-up and tear-down:

```c++
class Environment {
 public:
  virtual ~Environment() {}

  // Override this to define how to set up the environment.
  virtual void SetUp() {}

  // Override this to define how to tear down the environment.
  virtual void TearDown() {}
};
```

Then, you register an instance of your environment class with googletest by calling the `::testing::AddGlobalTestEnvironment()` function:

```c++
Environment* AddGlobalTestEnvironment(Environment* env);
```

Now, when `RUN_ALL_TESTS()` is called, it first calls the `SetUp()` method of the environment object, then runs the tests if there was no fatal failure, and finally calls `TearDown()` of the environment object.
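For instance, a concrete environment might be sketched like this (the resource being managed is hypothetical):

```c++
#include "gtest/gtest.h"

// A hypothetical environment that manages a resource shared by the whole
// test program (e.g. a temporary directory or a server connection).
class FooEnvironment : public ::testing::Environment {
 public:
  ~FooEnvironment() override {}

  // Acquire the shared resource before any test runs.
  void SetUp() override {
    // ... e.g. start the server, create the directory ...
  }

  // Release it after the last test has finished.
  void TearDown() override {
    // ... undo whatever SetUp() did ...
  }
};
```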
It's OK to register multiple environment objects. In this case, their `SetUp()` will be called in the order they are registered, and their `TearDown()` will be called in the reverse order.

Note that googletest takes ownership of the registered environment objects. Therefore **do not delete them** by yourself.

You should call `AddGlobalTestEnvironment()` before `RUN_ALL_TESTS()` is called, probably in `main()`. If you use `gtest_main`, you need to call this before `main()` starts for it to take effect. One way to do this is to define a global variable like this:

```c++
::testing::Environment* const foo_env =
    ::testing::AddGlobalTestEnvironment(new FooEnvironment);
```

However, we strongly recommend you to write your own `main()` and call `AddGlobalTestEnvironment()` there, as relying on initialization of global variables makes the code harder to read and may cause problems when you register multiple environments from different translation units and the environments have dependencies among them (remember that the compiler doesn't guarantee the order in which global variables from different translation units are initialized).

## Value-Parameterized Tests

*Value-parameterized tests* allow you to test your code with different parameters without writing multiple copies of the same test. This is useful in a number of situations, for example:

* You have a piece of code whose behavior is affected by one or more command-line flags. You want to make sure your code performs correctly for various values of those flags.
* You want to test different implementations of an OO interface.
* You want to test your code over various inputs (a.k.a. data-driven testing). This feature is easy to abuse, so please exercise your good sense when doing it!

### How to Write Value-Parameterized Tests

To write value-parameterized tests, first you should define a fixture class. It must be derived from both `::testing::Test` and `::testing::WithParamInterface<T>` (the latter is a pure interface), where `T` is the type of your parameter values. For convenience, you can just derive the fixture class from `::testing::TestWithParam<T>`, which itself is derived from both `::testing::Test` and `::testing::WithParamInterface<T>`. `T` can be any copyable type. If it's a raw pointer, you are responsible for managing the lifespan of the pointed values.

NOTE: If your test fixture defines `SetUpTestCase()` or `TearDownTestCase()` they must be declared **public** rather than **protected** in order to use `TEST_P`.

```c++
class FooTest :
    public ::testing::TestWithParam<const char*> {
  // You can implement all the usual fixture class members here.
  // To access the test parameter, call GetParam() from class
  // TestWithParam<T>.
};

// Or, when you want to add parameters to a pre-existing fixture class:
class BaseTest : public ::testing::Test {
  ...
};
class BarTest : public BaseTest,
                public ::testing::WithParamInterface<const char*> {
  ...
};
```

Then, use the `TEST_P` macro to define as many test patterns using this fixture as you want. The `_P` suffix is for "parameterized" or "pattern", whichever you prefer to think.

```c++
TEST_P(FooTest, DoesBlah) {
  // Inside a test, access the test parameter with the GetParam() method
  // of the TestWithParam<T> class:
  EXPECT_TRUE(foo.Blah(GetParam()));
  ...
}

TEST_P(FooTest, HasBlahBlah) {
  ...
}
```

Finally, you can use `INSTANTIATE_TEST_CASE_P` to instantiate the test case with any set of parameters you want. googletest defines a number of functions for generating test parameters. They return what we call (surprise!) *parameter generators*.
Here is a summary of them, which are all in the `testing` namespace:

| Parameter Generator                              | Behavior                                                                                                           |
| ------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------ |
| `Range(begin, end [, step])`                     | Yields values `{begin, begin+step, begin+step+step, ...}`. The values do not include `end`. `step` defaults to 1. |
| `Values(v1, v2, ..., vN)`                        | Yields values `{v1, v2, ..., vN}`.                                                                                 |
| `ValuesIn(container)` and `ValuesIn(begin, end)` | Yields values from a C-style array, an STL-style container, or an iterator range `[begin, end)`.                  |
| `Bool()`                                         | Yields sequence `{false, true}`.                                                                                   |
| `Combine(g1, g2, ..., gN)`                       | Yields all combinations (Cartesian product) as `std::tuple`s of the values generated by the `N` generators.       |

For more details, see the comments at the definitions of these functions.

The following statement will instantiate tests from the `FooTest` test case each with parameter values `"meeny"`, `"miny"`, and `"moe"`.

```c++
INSTANTIATE_TEST_CASE_P(InstantiationName,
                        FooTest,
                        ::testing::Values("meeny", "miny", "moe"));
```

NOTE: The code above must be placed at global or namespace scope, not at function scope.

NOTE: Don't forget this step! If you do your test will silently pass, but none of its cases will ever run!

To distinguish different instances of the pattern (yes, you can instantiate it more than once), the first argument to `INSTANTIATE_TEST_CASE_P` is a prefix that will be added to the actual test case name. Remember to pick unique prefixes for different instantiations. The tests from the instantiation above will have these names:

* `InstantiationName/FooTest.DoesBlah/0` for `"meeny"`
* `InstantiationName/FooTest.DoesBlah/1` for `"miny"`
* `InstantiationName/FooTest.DoesBlah/2` for `"moe"`
* `InstantiationName/FooTest.HasBlahBlah/0` for `"meeny"`
* `InstantiationName/FooTest.HasBlahBlah/1` for `"miny"`
* `InstantiationName/FooTest.HasBlahBlah/2` for `"moe"`

You can use these names in [`--gtest_filter`](#running-a-subset-of-the-tests).

This statement will instantiate all tests from `FooTest` again, each with parameter values `"cat"` and `"dog"`:

```c++
const char* pets[] = {"cat", "dog"};
INSTANTIATE_TEST_CASE_P(AnotherInstantiationName, FooTest,
                        ::testing::ValuesIn(pets));
```

The tests from the instantiation above will have these names:

* `AnotherInstantiationName/FooTest.DoesBlah/0` for `"cat"`
* `AnotherInstantiationName/FooTest.DoesBlah/1` for `"dog"`
* `AnotherInstantiationName/FooTest.HasBlahBlah/0` for `"cat"`
* `AnotherInstantiationName/FooTest.HasBlahBlah/1` for `"dog"`

Please note that `INSTANTIATE_TEST_CASE_P` will instantiate *all* tests in the given test case, whether their definitions come before or *after* the `INSTANTIATE_TEST_CASE_P` statement.

You can see sample7_unittest.cc and sample8_unittest.cc for more examples.

**Availability**: Linux, Windows (requires MSVC 8.0 or above), Mac

### Creating Value-Parameterized Abstract Tests

In the above, we define and instantiate `FooTest` in the *same* source file. Sometimes you may want to define value-parameterized tests in a library and let other people instantiate them later. This pattern is known as *abstract tests*. As an example of its application, when you are designing an interface you can write a standard suite of abstract tests (perhaps using a factory function as the test parameter) that all implementations of the interface are expected to pass. When someone implements the interface, they can instantiate your suite to get all the interface-conformance tests for free.

To define abstract tests, you should organize your code like this:
1.  Put the definition of the parameterized test fixture class (e.g. `FooTest`) in a header file, say `foo_param_test.h`. Think of this as *declaring* your abstract tests.
1.  Put the `TEST_P` definitions in `foo_param_test.cc`, which includes `foo_param_test.h`. Think of this as *implementing* your abstract tests.

Once they are defined, you can instantiate them by including `foo_param_test.h`, invoking `INSTANTIATE_TEST_CASE_P()`, and depending on the library target that contains `foo_param_test.cc`. You can instantiate the same abstract test case multiple times, possibly in different source files.

### Specifying Names for Value-Parameterized Test Parameters

The optional last argument to `INSTANTIATE_TEST_CASE_P()` allows the user to specify a function or functor that generates custom test name suffixes based on the test parameters. The function should accept one argument of type `testing::TestParamInfo<class ParamType>`, and return `std::string`.

`testing::PrintToStringParamName` is a builtin test suffix generator that returns the value of `testing::PrintToString(GetParam())`. It does not work for `std::string` or C strings.

NOTE: test names must be non-empty, unique, and may only contain ASCII alphanumeric characters. In particular, they [should not contain underscores](https://g3doc.corp.google.com/third_party/googletest/googletest/g3doc/faq.md#no-underscores).

```c++
class MyTestCase : public testing::TestWithParam<int> {};

TEST_P(MyTestCase, MyTest)
{
  std::cout << "Example Test Param: " << GetParam() << std::endl;
}

INSTANTIATE_TEST_CASE_P(MyGroup, MyTestCase, testing::Range(0, 10),
                        testing::PrintToStringParamName());
```

## Typed Tests

Suppose you have multiple implementations of the same interface and want to make sure that all of them satisfy some common requirements. Or, you may have defined several types that are supposed to conform to the same "concept" and you want to verify it. In both cases, you want the same test logic repeated for different types.

While you can write one `TEST` or `TEST_F` for each type you want to test (and you may even factor the test logic into a function template that you invoke from the `TEST`), it's tedious and doesn't scale: if you want `m` tests over `n` types, you'll end up writing `m*n` `TEST`s.

*Typed tests* allow you to repeat the same test logic over a list of types. You only need to write the test logic once, although you must know the type list when writing typed tests. Here's how you do it:

First, define a fixture class template. It should be parameterized by a type. Remember to derive it from `::testing::Test`:

```c++
template <typename T>
class FooTest : public ::testing::Test {
 public:
  ...
  typedef std::list<T> List;
  static T shared_;
  T value_;
};
```

Next, associate a list of types with the test case, which will be repeated for each type in the list:

```c++
using MyTypes = ::testing::Types<char, int, unsigned int>;
TYPED_TEST_CASE(FooTest, MyTypes);
```

The type alias (`using` or `typedef`) is necessary for the `TYPED_TEST_CASE` macro to parse correctly. Otherwise the compiler will think that each comma in the type list introduces a new macro argument.

Then, use `TYPED_TEST()` instead of `TEST_F()` to define a typed test for this test case. You can repeat this as many times as you want:

```c++
TYPED_TEST(FooTest, DoesBlah) {
  // Inside a test, refer to the special name TypeParam to get the type
  // parameter.  Since we are inside a derived class template, C++ requires
  // us to visit the members of FooTest via 'this'.
  TypeParam n = this->value_;

  // To visit static members of the fixture, add the 'TestFixture::'
  // prefix.
  n += TestFixture::shared_;

  // To refer to typedefs in the fixture, add the 'typename TestFixture::'
  // prefix. The 'typename' is required to satisfy the compiler.
  typename TestFixture::List values;
  values.push_back(n);
  ...
}

TYPED_TEST(FooTest, HasPropertyA) { ... }
```

You can see `sample6_unittest.cc` for a complete example.

**Availability**: Linux, Windows (requires MSVC 8.0 or above), Mac

## Type-Parameterized Tests

*Type-parameterized tests* are like typed tests, except that they don't require you to know the list of types ahead of time. Instead, you can define the test logic first and instantiate it with different type lists later. You can even instantiate it more than once in the same program.

If you are designing an interface or concept, you can define a suite of type-parameterized tests to verify properties that any valid implementation of the interface/concept should have. Then, the author of each implementation can just instantiate the test suite with their type to verify that it conforms to the requirements, without having to write similar tests repeatedly. Here's an example:

First, define a fixture class template, as we did with typed tests:

```c++
template <typename T>
class FooTest : public ::testing::Test {
  ...
};
```

Next, declare that you will define a type-parameterized test case:

```c++
TYPED_TEST_CASE_P(FooTest);
```

Then, use `TYPED_TEST_P()` to define a type-parameterized test. You can repeat this as many times as you want:

```c++
TYPED_TEST_P(FooTest, DoesBlah) {
  // Inside a test, refer to TypeParam to get the type parameter.
  TypeParam n = 0;
  ...
}

TYPED_TEST_P(FooTest, HasPropertyA) { ... }
```

Now the tricky part: you need to register all test patterns using the `REGISTER_TYPED_TEST_CASE_P` macro before you can instantiate them. The first argument of the macro is the test case name; the rest are the names of the tests in this test case:

```c++
REGISTER_TYPED_TEST_CASE_P(FooTest,
                           DoesBlah, HasPropertyA);
```

Finally, you are free to instantiate the pattern with the types you want. If you put the above code in a header file, you can `#include` it in multiple C++ source files and instantiate it multiple times.

```c++
typedef ::testing::Types<char, int, unsigned int> MyTypes;
INSTANTIATE_TYPED_TEST_CASE_P(My, FooTest, MyTypes);
```

To distinguish different instances of the pattern, the first argument to the `INSTANTIATE_TYPED_TEST_CASE_P` macro is a prefix that will be added to the actual test case name. Remember to pick unique prefixes for different instances.

In the special case where the type list contains only one type, you can write that type directly without `::testing::Types<...>`, like this:

```c++
INSTANTIATE_TYPED_TEST_CASE_P(My, FooTest, int);
```

You can see `sample6_unittest.cc` for a complete example.

**Availability**: Linux, Windows (requires MSVC 8.0 or above), Mac

## Testing Private Code

If you change your software's internal implementation, your tests should not break as long as the change is not observable by users. Therefore, **per the black-box testing principle, most of the time you should test your code through its public interfaces.**

**If you still find yourself needing to test internal implementation code, consider if there's a better design.** The desire to test internal implementation is often a sign that the class is doing too much. Consider extracting an implementation class, and testing it.
Then use that implementation class in the original class.

If you absolutely have to test non-public interface code though, you can. There are two cases to consider:

* Static functions (*not* the same as static member functions!) or unnamed namespaces, and
* Private or protected class members

To test them, we use the following special techniques:

*   Both static functions and definitions/declarations in an unnamed namespace are only visible within the same translation unit. To test them, you can `#include` the entire `.cc` file being tested in your `*_test.cc` file. (Including `.cc` files is not a good way to reuse code - you should not do this in production code!)

    However, a better approach is to move the private code into the `foo::internal` namespace, where `foo` is the namespace your project normally uses, and put the private declarations in a `*-internal.h` file. Your production `.cc` files and your tests are allowed to include this internal header, but your clients are not. This way, you can fully test your internal implementation without leaking it to your clients.

*   Private class members are only accessible from within the class or by friends. To access a class' private members, you can declare your test fixture as a friend to the class and define accessors in your fixture. Tests using the fixture can then access the private members of your production class via the accessors in the fixture. Note that even though your fixture is a friend to your production class, your tests are not automatically friends to it, as they are technically defined in sub-classes of the fixture. (A short sketch of this fixture-accessor approach appears after this section.)

    Another way to test private members is to refactor them into an implementation class, which is then declared in a `*-internal.h` file. Your clients aren't allowed to include this header but your tests can. This is called the [Pimpl](https://www.gamedev.net/articles/programming/general-and-gameplay-programming/the-c-pimpl-r1794/) (Private Implementation) idiom.

    Or, you can declare an individual test as a friend of your class by adding this line in the class body:

    ```c++
    FRIEND_TEST(TestCaseName, TestName);
    ```

    For example,

    ```c++
    // foo.h
    #include "gtest/gtest_prod.h"

    class Foo {
      ...
     private:
      FRIEND_TEST(FooTest, BarReturnsZeroOnNull);

      int Bar(void* x);
    };

    // foo_test.cc
    ...
    TEST(FooTest, BarReturnsZeroOnNull) {
      Foo foo;
      EXPECT_EQ(0, foo.Bar(NULL));  // Uses Foo's private member Bar().
    }
    ```

    Pay special attention when your class is defined in a namespace, as you should define your test fixtures and tests in the same namespace if you want them to be friends of your class. For example, if the code to be tested looks like:

    ```c++
    namespace my_namespace {

    class Foo {
      friend class FooTest;
      FRIEND_TEST(FooTest, Bar);
      FRIEND_TEST(FooTest, Baz);
      ... definition of the class Foo ...
    };

    }  // namespace my_namespace
    ```

    Your test code should be something like:

    ```c++
    namespace my_namespace {

    class FooTest : public ::testing::Test {
     protected:
      ...
    };

    TEST_F(FooTest, Bar) { ... }
    TEST_F(FooTest, Baz) { ... }

    }  // namespace my_namespace
    ```
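As a rough illustration of the fixture-accessor technique mentioned above, here is a minimal sketch (assuming `"gtest/gtest.h"` is included; `Queue` and `QueueTest` are hypothetical names, not part of googletest):

```c++
// Hypothetical production class; QueueTest is granted friendship so that the
// fixture (not the tests themselves) can reach the private member.
class Queue {
 public:
  Queue() : size_(0) {}
  void Enqueue(int /*value*/) { size_++; }

 private:
  friend class QueueTest;
  int size_;
};

class QueueTest : public ::testing::Test {
 protected:
  // Accessor defined in the fixture; tests call this instead of touching
  // Queue::size_ directly, since tests are only sub-classes of the fixture.
  static int GetSize(const Queue& q) { return q.size_; }
};

TEST_F(QueueTest, EnqueueIncrementsSize) {
  Queue q;
  q.Enqueue(42);
  EXPECT_EQ(1, GetSize(q));
}
```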
## "Catching" Failures

If you are building a testing utility on top of googletest, you'll want to test your utility. What framework would you use to test it? googletest, of course.

The challenge is to verify that your testing utility reports failures correctly. In frameworks that report a failure by throwing an exception, you could catch the exception and assert on it. But googletest doesn't use exceptions, so how do we test that a piece of code generates an expected failure?

`"gtest/gtest-spi.h"` contains some constructs to do this. After #including this header, you can use

```c++
EXPECT_FATAL_FAILURE(statement, substring);
```

to assert that `statement` generates a fatal (e.g. `ASSERT_*`) failure in the current thread whose message contains the given `substring`, or use

```c++
EXPECT_NONFATAL_FAILURE(statement, substring);
```

if you are expecting a non-fatal (e.g. `EXPECT_*`) failure.

Only failures in the current thread are checked to determine the result of this type of expectation. If `statement` creates new threads, failures in these threads are also ignored. If you want to catch failures in other threads as well, use one of the following macros instead:

```c++
EXPECT_FATAL_FAILURE_ON_ALL_THREADS(statement, substring);
EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS(statement, substring);
```

NOTE: Assertions from multiple threads are currently not supported on Windows.

For technical reasons, there are some caveats:

1. You cannot stream a failure message to either macro.
1. `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot reference local non-static variables or non-static members of `this` object.
1. `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot return a value.

## Getting the Current Test's Name

Sometimes a function may need to know the name of the currently running test. For example, you may be using the `SetUp()` method of your test fixture to set the golden file name based on which test is running. The `::testing::TestInfo` class has this information:

```c++
namespace testing {

class TestInfo {
 public:
  // Returns the test case name and the test name, respectively.
  //
  // Do NOT delete or free the return value - it's managed by the
  // TestInfo class.
  const char* test_case_name() const;
  const char* name() const;
};

}
```

To obtain a `TestInfo` object for the currently running test, call `current_test_info()` on the `UnitTest` singleton object:

```c++
// Gets information about the currently running test.
// Do NOT delete the returned object - it's managed by the UnitTest class.
const ::testing::TestInfo* const test_info =
    ::testing::UnitTest::GetInstance()->current_test_info();

printf("We are in test %s of test case %s.\n",
       test_info->name(),
       test_info->test_case_name());
```

`current_test_info()` returns a null pointer if no test is running. In particular, you cannot find the test case name in `TestCaseSetUp()`, `TestCaseTearDown()` (where you know the test case name implicitly), or functions called from them.

**Availability**: Linux, Windows, Mac.
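For instance, the golden-file scenario mentioned above might look roughly like the following sketch (assuming `<string>` and `"gtest/gtest.h"` are included; the fixture name and path scheme are hypothetical):

```c++
class GoldenFileTest : public ::testing::Test {
 protected:
  virtual void SetUp() {
    const ::testing::TestInfo* const test_info =
        ::testing::UnitTest::GetInstance()->current_test_info();
    // Yields e.g. "goldens/GoldenFileTest.ProducesExpectedReport.golden".
    golden_path_ = std::string("goldens/") + test_info->test_case_name() +
                   "." + test_info->name() + ".golden";
  }

  std::string golden_path_;
};

TEST_F(GoldenFileTest, ProducesExpectedReport) {
  // golden_path_ now names this test's golden file; open and compare it here.
  EXPECT_FALSE(golden_path_.empty());
}
```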
## Extending googletest by Handling Test Events

googletest provides an **event listener API** to let you receive notifications about the progress of a test program and test failures. The events you can listen to include the start and end of the test program, a test case, or a test method, among others. You may use this API to augment or replace the standard console output, replace the XML output, or provide a completely different form of output, such as a GUI or a database. You can also use test events as checkpoints to implement a resource leak checker, for example.

**Availability**: Linux, Windows, Mac.

### Defining Event Listeners

To define an event listener, you subclass either testing::TestEventListener or testing::EmptyTestEventListener. The former is an (abstract) interface, where *each pure virtual method can be overridden to handle a test event* (for example, when a test starts, the `OnTestStart()` method will be called). The latter provides an empty implementation of all methods in the interface, such that a subclass only needs to override the methods it cares about.

When an event is fired, its context is passed to the handler function as an argument. The following argument types are used:

* UnitTest reflects the state of the entire test program,
* TestCase has information about a test case, which can contain one or more tests,
* TestInfo contains the state of a test, and
* TestPartResult represents the result of a test assertion.

An event handler function can examine the argument it receives to find out interesting information about the event and the test program's state.

Here's an example:

```c++
class MinimalistPrinter : public ::testing::EmptyTestEventListener {
  // Called before a test starts.
  virtual void OnTestStart(const ::testing::TestInfo& test_info) {
    printf("*** Test %s.%s starting.\n",
           test_info.test_case_name(), test_info.name());
  }

  // Called after a failed assertion or a SUCCEED().
  virtual void OnTestPartResult(const ::testing::TestPartResult& test_part_result) {
    printf("%s in %s:%d\n%s\n",
           test_part_result.failed() ? "*** Failure" : "Success",
           test_part_result.file_name(),
           test_part_result.line_number(),
           test_part_result.summary());
  }

  // Called after a test ends.
  virtual void OnTestEnd(const ::testing::TestInfo& test_info) {
    printf("*** Test %s.%s ending.\n",
           test_info.test_case_name(), test_info.name());
  }
};
```

### Using Event Listeners

To use the event listener you have defined, add an instance of it to the googletest event listener list (represented by class TestEventListeners - note the "s" at the end of the name) in your `main()` function, before calling `RUN_ALL_TESTS()`:

```c++
int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  // Gets hold of the event listener list.
  ::testing::TestEventListeners& listeners =
      ::testing::UnitTest::GetInstance()->listeners();
  // Adds a listener to the end.  googletest takes ownership.
  listeners.Append(new MinimalistPrinter);
  return RUN_ALL_TESTS();
}
```

There's only one problem: the default test result printer is still in effect, so its output will mingle with the output from your minimalist printer. To suppress the default printer, just release it from the event listener list and delete it. You can do so by adding one line:

```c++
  ...
  delete listeners.Release(listeners.default_result_printer());
  listeners.Append(new MinimalistPrinter);
  return RUN_ALL_TESTS();
```

Now, sit back and enjoy a completely different output from your tests. For more details, see sample9_unittest.cc.

You may append more than one listener to the list. When an `On*Start()` or `OnTestPartResult()` event is fired, the listeners will receive it in the order they appear in the list (since new listeners are added to the end of the list, the default text printer and the default XML generator will receive the event first). An `On*End()` event will be received by the listeners in the *reverse* order. This allows output by listeners added later to be framed by output from listeners added earlier.

### Generating Failures in Listeners

You may use failure-raising macros (`EXPECT_*()`, `ASSERT_*()`, `FAIL()`, etc) when processing an event. There are some restrictions:

1. You cannot generate any failure in `OnTestPartResult()` (otherwise it will cause `OnTestPartResult()` to be called recursively).
1. 
A listener that handles `OnTestPartResult()` is not allowed to generate anyfailure.When you add listeners to the listener list, you should put listeners thathandle `OnTestPartResult()` *before* listeners that can generate failures. Thisensures that failures generated by the latter are attributed to the right testby the former.We have a sample of failure-raising listener sample10_unittest.cc## Running Test Programs: Advanced Optionsgoogletest test programs are ordinary executables. Once built, you can run themdirectly and affect their behavior via the following environment variablesand/or command line flags. For the flags to work, your programs must call`::testing::InitGoogleTest()` before calling `RUN_ALL_TESTS()`.To see a list of supported flags and their usage, please run your test programwith the `--help` flag. You can also use `-h`, `-?`, or `/?` for short.If an option is specified both by an environment variable and by a flag, thelatter takes precedence.### Selecting Tests#### Listing Test NamesSometimes it is necessary to list the available tests in a program beforerunning them so that a filter may be applied if needed. Including the flag`--gtest_list_tests` overrides all other flags and lists tests in the followingformat:```noneTestCase1.TestName1TestName2TestCase2.TestName```None of the tests listed are actually run if the flag is provided. There is nocorresponding environment variable for this flag.**Availability**: Linux, Windows, Mac.#### Running a Subset of the TestsBy default, a googletest program runs all tests the user has defined. Sometimes,you want to run only a subset of the tests (e.g. for debugging or quicklyverifying a change). If you set the `GTEST_FILTER` environment variable or the`--gtest_filter` flag to a filter string, googletest will only run the testswhose full names (in the form of `TestCaseName.TestName`) match the filter.The format of a filter is a '`:`'-separated list of wildcard patterns (calledthe *positive patterns*) optionally followed by a '`-`' and another'`:`'-separated pattern list (called the *negative patterns*). A test matchesthe filter if and only if it matches any of the positive patterns but does notmatch any of the negative patterns.A pattern may contain `'*'` (matches any string) or `'?'` (matches any singlecharacter). For convenience, the filter`'*-NegativePatterns'` can be also written as `'-NegativePatterns'`.For example:* `./foo_test` Has no flag, and thus runs all its tests.* `./foo_test --gtest_filter=*` Also runs everything, due to the singlematch-everything `*` value.* `./foo_test --gtest_filter=FooTest.*` Runs everything in test case `FooTest`.* `./foo_test --gtest_filter=*Null*:*Constructor*` Runs any test whose fullname contains either `"Null"` or `"Constructor"` .* `./foo_test --gtest_filter=-*DeathTest.*` Runs all non-death tests.* `./foo_test --gtest_filter=FooTest.*-FooTest.Bar` Runs everything in testcase `FooTest` except `FooTest.Bar`.* `./foo_test --gtest_filter=FooTest.*:BarTest.*-FooTest.Bar:BarTest.Foo` Runseverything in test case `FooTest` except `FooTest.Bar` and everything intest case `BarTest` except `BarTest.Foo`.#### Temporarily Disabling TestsIf you have a broken test that you cannot fix right away, you can add the`DISABLED_` prefix to its name. This will exclude it from execution. 
This isbetter than commenting out the code or using `#if 0`, as disabled tests arestill compiled (and thus won't rot).If you need to disable all tests in a test case, you can either add `DISABLED_`to the front of the name of each test, or alternatively add it to the front ofthe test case name.For example, the following tests won't be run by googletest, even though theywill still be compiled:```c++// Tests that Foo does Abc.TEST(FooTest, DISABLED_DoesAbc) { ... }class DISABLED_BarTest : public ::testing::Test { ... };// Tests that Bar does Xyz.TEST_F(DISABLED_BarTest, DoesXyz) { ... }```NOTE: This feature should only be used for temporary pain-relief. You still haveto fix the disabled tests at a later date. As a reminder, googletest will printa banner warning you if a test program contains any disabled tests.TIP: You can easily count the number of disabled tests you have using `gsearch`and/or `grep`. This number can be used as a metric for improving your testquality.**Availability**: Linux, Windows, Mac.#### Temporarily Enabling Disabled TestsTo include disabled tests in test execution, just invoke the test program withthe `--gtest_also_run_disabled_tests` flag or set the`GTEST_ALSO_RUN_DISABLED_TESTS` environment variable to a value other than `0`.You can combine this with the `--gtest_filter` flag to further select whichdisabled tests to run.**Availability**: Linux, Windows, Mac.### Repeating the TestsOnce in a while you'll run into a test whose result is hit-or-miss. Perhaps itwill fail only 1% of the time, making it rather hard to reproduce the bug undera debugger. This can be a major source of frustration.The `--gtest_repeat` flag allows you to repeat all (or selected) test methods ina program many times. Hopefully, a flaky test will eventually fail and give youa chance to debug. Here's how to use it:```none$ foo_test --gtest_repeat=1000Repeat foo_test 1000 times and don't stop at failures.$ foo_test --gtest_repeat=-1A negative count means repeating forever.$ foo_test --gtest_repeat=1000 --gtest_break_on_failureRepeat foo_test 1000 times, stopping at the first failure. Thisis especially useful when running under a debugger: when the testfails, it will drop into the debugger and you can then inspectvariables and stacks.$ foo_test --gtest_repeat=1000 --gtest_filter=FooBar.*Repeat the tests whose name matches the filter 1000 times.```If your test program contains [global set-up/tear-down](#global-set-up-and-tear-down) code, itwill be repeated in each iteration as well, as the flakiness may be in it. Youcan also specify the repeat count by setting the `GTEST_REPEAT` environmentvariable.**Availability**: Linux, Windows, Mac.### Shuffling the TestsYou can specify the `--gtest_shuffle` flag (or set the `GTEST_SHUFFLE`environment variable to `1`) to run the tests in a program in a random order.This helps to reveal bad dependencies between tests.By default, googletest uses a random seed calculated from the current time.Therefore you'll get a different order every time. The console output includesthe random seed value, such that you can reproduce an order-related test failurelater. To specify the random seed explicitly, use the `--gtest_random_seed=SEED`flag (or set the `GTEST_RANDOM_SEED` environment variable), where `SEED` is aninteger in the range [0, 99999]. 
The seed value 0 is special: it tellsgoogletest to do the default behavior of calculating the seed from the currenttime.If you combine this with `--gtest_repeat=N`, googletest will pick a differentrandom seed and re-shuffle the tests in each iteration.**Availability**: Linux, Windows, Mac.### Controlling Test Output#### Colored Terminal Outputgoogletest can use colors in its terminal output to make it easier to spot theimportant information:...<br/><span style="color:green">[----------]<span style="color:black"> 1 test from FooTest<br/><span style="color:green">[ RUN ]<span style="color:black"> FooTest.DoesAbc<br/><span style="color:green">[ OK ]<span style="color:black"> FooTest.DoesAbc<br/><span style="color:green">[----------]<span style="color:black"> 2 tests from BarTest<br/><span style="color:green">[ RUN ]<span style="color:black"> BarTest.HasXyzProperty<br/><span style="color:green">[ OK ]<span style="color:black"> BarTest.HasXyzProperty<br/><span style="color:green">[ RUN ]<span style="color:black"> BarTest.ReturnsTrueOnSuccess<br/>... some error messages ...<br/><span style="color:red">[ FAILED ] <span style="color:black">BarTest.ReturnsTrueOnSuccess<br/>...<br/><span style="color:green">[==========]<span style="color:black"> 30 tests from 14 test cases ran.<br/><span style="color:green">[ PASSED ]<span style="color:black"> 28 tests.<br/><span style="color:red">[ FAILED ]<span style="color:black"> 2 tests, listed below:<br/><span style="color:red">[ FAILED ]<span style="color:black"> BarTest.ReturnsTrueOnSuccess<br/><span style="color:red">[ FAILED ]<span style="color:black"> AnotherTest.DoesXyz<br/>2 FAILED TESTSYou can set the `GTEST_COLOR` environment variable or the `--gtest_color`command line flag to `yes`, `no`, or `auto` (the default) to enable colors,disable colors, or let googletest decide. When the value is `auto`, googletestwill use colors if and only if the output goes to a terminal and (on non-Windowsplatforms) the `TERM` environment variable is set to `xterm` or `xterm-color`.**Availability**: Linux, Windows, Mac.#### Suppressing the Elapsed TimeBy default, googletest prints the time it takes to run each test. To disablethat, run the test program with the `--gtest_print_time=0` command line flag, orset the GTEST_PRINT_TIME environment variable to `0`.**Availability**: Linux, Windows, Mac.#### Suppressing UTF-8 Text OutputIn case of assertion failures, googletest prints expected and actual values oftype `string` both as hex-encoded strings as well as in readable UTF-8 text ifthey contain valid non-ASCII UTF-8 characters. If you want to suppress the UTF-8text because, for example, you don't have an UTF-8 compatible output medium, runthe test program with `--gtest_print_utf8=0` or set the `GTEST_PRINT_UTF8`environment variable to `0`.**Availability**: Linux, Windows, Mac.#### Generating an XML Reportgoogletest can emit a detailed XML report to a file in addition to its normaltextual output. The report contains the duration of each test, and thus can helpyou identify slow tests. The report is also used by the http://unittestdashboard to show per-test-method error messages.To generate the XML report, set the `GTEST_OUTPUT` environment variable or the`--gtest_output` flag to the string `"xml:path_to_output_file"`, which willcreate the file at the given location. 
You can also just use the string `"xml"`,in which case the output can be found in the `test_detail.xml` file in thecurrent directory.If you specify a directory (for example, `"xml:output/directory/"` on Linux or`"xml:output\directory\"` on Windows), googletest will create the XML file inthat directory, named after the test executable (e.g. `foo_test.xml` for testprogram `foo_test` or `foo_test.exe`). If the file already exists (perhaps leftover from a previous run), googletest will pick a different name (e.g.`foo_test_1.xml`) to avoid overwriting it.The report is based on the `junitreport` Ant task. Since that format wasoriginally intended for Java, a little interpretation is required to make itapply to googletest tests, as shown here:```xml<testsuites name="AllTests" ...><testsuite name="test_case_name" ...><testcase name="test_name" ...><failure message="..."/><failure message="..."/><failure message="..."/></testcase></testsuite></testsuites>```* The root `<testsuites>` element corresponds to the entire test program.* `<testsuite>` elements correspond to googletest test cases.* `<testcase>` elements correspond to googletest test functions.For instance, the following program```c++TEST(MathTest, Addition) { ... }TEST(MathTest, Subtraction) { ... }TEST(LogicTest, NonContradiction) { ... }```could generate this report:```xml<?xml version="1.0" encoding="UTF-8"?><testsuites tests="3" failures="1" errors="0" time="0.035" timestamp="2011-10-31T18:52:42" name="AllTests"><testsuite name="MathTest" tests="2" failures="1" errors="0" time="0.015"><testcase name="Addition" status="run" time="0.007" classname=""><failure message="Value of: add(1, 1)
 Actual: 3
Expected: 2" type="">...</failure><failure message="Value of: add(1, -1)
 Actual: 1
Expected: 0" type="">...</failure></testcase><testcase name="Subtraction" status="run" time="0.005" classname=""></testcase></testsuite><testsuite name="LogicTest" tests="1" failures="0" errors="0" time="0.005"><testcase name="NonContradiction" status="run" time="0.005" classname=""></testcase></testsuite></testsuites>```Things to note:* The `tests` attribute of a `<testsuites>` or `<testsuite>` element tells howmany test functions the googletest program or test case contains, while the`failures` attribute tells how many of them failed.* The `time` attribute expresses the duration of the test, test case, orentire test program in seconds.* The `timestamp` attribute records the local date and time of the testexecution.* Each `<failure>` element corresponds to a single failed googletestassertion.**Availability**: Linux, Windows, Mac.#### Generating an JSON Reportgoogletest can also emit a JSON report as an alternative format to XML. Togenerate the JSON report, set the `GTEST_OUTPUT` environment variable or the`--gtest_output` flag to the string `"json:path_to_output_file"`, which willcreate the file at the given location. You can also just use the string`"json"`, in which case the output can be found in the `test_detail.json` filein the current directory.The report format conforms to the following JSON Schema:```json{"$schema": "http://json-schema.org/schema#","type": "object","definitions": {"TestCase": {"type": "object","properties": {"name": { "type": "string" },"tests": { "type": "integer" },"failures": { "type": "integer" },"disabled": { "type": "integer" },"time": { "type": "string" },"testsuite": {"type": "array","items": {"$ref": "#/definitions/TestInfo"}}}},"TestInfo": {"type": "object","properties": {"name": { "type": "string" },"status": {"type": "string","enum": ["RUN", "NOTRUN"]},"time": { "type": "string" },"classname": { "type": "string" },"failures": {"type": "array","items": {"$ref": "#/definitions/Failure"}}}},"Failure": {"type": "object","properties": {"failures": { "type": "string" },"type": { "type": "string" }}}},"properties": {"tests": { "type": "integer" },"failures": { "type": "integer" },"disabled": { "type": "integer" },"errors": { "type": "integer" },"timestamp": {"type": "string","format": "date-time"},"time": { "type": "string" },"name": { "type": "string" },"testsuites": {"type": "array","items": {"$ref": "#/definitions/TestCase"}}}}```The report uses the format that conforms to the following Proto3 using the [JSONencoding](https://developers.google.com/protocol-buffers/docs/proto3#json):```protosyntax = "proto3";package googletest;import "google/protobuf/timestamp.proto";import "google/protobuf/duration.proto";message UnitTest {int32 tests = 1;int32 failures = 2;int32 disabled = 3;int32 errors = 4;google.protobuf.Timestamp timestamp = 5;google.protobuf.Duration time = 6;string name = 7;repeated TestCase testsuites = 8;}message TestCase {string name = 1;int32 tests = 2;int32 failures = 3;int32 disabled = 4;int32 errors = 5;google.protobuf.Duration time = 6;repeated TestInfo testsuite = 7;}message TestInfo {string name = 1;enum Status {RUN = 0;NOTRUN = 1;}Status status = 2;google.protobuf.Duration time = 3;string classname = 4;message Failure {string failures = 1;string type = 2;}repeated Failure failures = 5;}```For instance, the following program```c++TEST(MathTest, Addition) { ... }TEST(MathTest, Subtraction) { ... }TEST(LogicTest, NonContradiction) { ... 
}
```

could generate this report:

```json
{
  "tests": 3,
  "failures": 1,
  "errors": 0,
  "time": "0.035s",
  "timestamp": "2011-10-31T18:52:42Z",
  "name": "AllTests",
  "testsuites": [
    {
      "name": "MathTest",
      "tests": 2,
      "failures": 1,
      "errors": 0,
      "time": "0.015s",
      "testsuite": [
        {
          "name": "Addition",
          "status": "RUN",
          "time": "0.007s",
          "classname": "",
          "failures": [
            {
              "message": "Value of: add(1, 1)\x0A  Actual: 3\x0AExpected: 2",
              "type": ""
            },
            {
              "message": "Value of: add(1, -1)\x0A  Actual: 1\x0AExpected: 0",
              "type": ""
            }
          ]
        },
        {
          "name": "Subtraction",
          "status": "RUN",
          "time": "0.005s",
          "classname": ""
        }
      ]
    },
    {
      "name": "LogicTest",
      "tests": 1,
      "failures": 0,
      "errors": 0,
      "time": "0.005s",
      "testsuite": [
        {
          "name": "NonContradiction",
          "status": "RUN",
          "time": "0.005s",
          "classname": ""
        }
      ]
    }
  ]
}
```

IMPORTANT: The exact format of the JSON document is subject to change.

**Availability**: Linux, Windows, Mac.

### Controlling How Failures Are Reported

#### Turning Assertion Failures into Break-Points

When running test programs under a debugger, it's very convenient if the debugger can catch an assertion failure and automatically drop into interactive mode. googletest's *break-on-failure* mode supports this behavior.

To enable it, set the `GTEST_BREAK_ON_FAILURE` environment variable to a value other than `0`. Alternatively, you can use the `--gtest_break_on_failure` command line flag.

**Availability**: Linux, Windows, Mac.

#### Disabling Catching Test-Thrown Exceptions

googletest can be used either with or without exceptions enabled. If a test throws a C++ exception or (on Windows) a structured exception (SEH), by default googletest catches it, reports it as a test failure, and continues with the next test method. This maximizes the coverage of a test run. Also, on Windows an uncaught exception will cause a pop-up window, so catching the exceptions allows you to run the tests automatically.

When debugging the test failures, however, you may instead want the exceptions to be handled by the debugger, such that you can examine the call stack when an exception is thrown. To achieve that, set the `GTEST_CATCH_EXCEPTIONS` environment variable to `0`, or use the `--gtest_catch_exceptions=0` flag when running the tests.

**Availability**: Linux, Windows, Mac.
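As a rough illustration (the test binary name `foo_test` is just a placeholder), these two flags can be combined with the selection flags described earlier when chasing a crash under a debugger:

```none
$ foo_test --gtest_filter=BarTest.* --gtest_break_on_failure --gtest_catch_exceptions=0
    Run only the BarTest tests, break into the debugger on the first assertion
    failure, and let the debugger (rather than googletest) handle any thrown
    exception.
```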