report.ci publishes your test results to GitHub and generates code annotations, so if someone breaks the tests in a pull request you know where it came from. It supports a wide range of testing frameworks and is easy to use with continuous integration services. Including it can be as simple as one command:

 python < $(curl …)

report.ci also provides badges that tell you how many tests ran, as well as build scheduling for custom CI systems.

GitHub reporting

report.ci takes your test results and generates a detailed test report on GitHub, including annotations, so you know what went wrong.

Besides statistics, this includes annotations that will show up in pull request diffs.

See it in action.
Frameworks

report.ci can handle input from 28 test frameworks across 9 programming languages.

This includes the most popular frameworks like JUnit, but also smaller ones like doctest for C++.

See a full list
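JUnit's XML format is the closest thing to a common interchange format among these frameworks, which is why so many tools can emit it. A minimal result file can be sketched with Python's standard library (the exact attributes your framework writes may differ):

```python
import xml.etree.ElementTree as ET

# Build a minimal JUnit-style XML report: one passing and one failing test.
suite = ET.Element("testsuite", name="example", tests="2", failures="1")

ET.SubElement(suite, "testcase",
              classname="example.MathTest", name="test_add", time="0.001")

bad = ET.SubElement(suite, "testcase",
                    classname="example.MathTest", name="test_div", time="0.002")
failure = ET.SubElement(bad, "failure", message="ZeroDivisionError")
failure.text = "division by zero"

xml_report = ET.tostring(suite, encoding="unicode")
print(xml_report)
```

A file in this shape is what a result-uploading service typically parses to count passes and failures and to locate the failing test cases.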

You can also upload the output from 9 different tools (compilers or interpreters).

This will generate an annotated log and annotations for GitHub.

See a full list
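Compiler diagnostics mostly follow a uniform `file:line:column: severity: message` shape (GCC and Clang both use it), which is what makes turning a raw build log into annotations feasible. A sketch of such a parser — the field names here are illustrative, not report.ci's actual schema:

```python
import re

# GCC/Clang-style diagnostic line: file:line:col: severity: message
DIAG = re.compile(
    r"^(?P<file>[^:]+):(?P<line>\d+):(?P<col>\d+): (?P<severity>\w+): (?P<message>.*)$"
)

def parse_log(log: str) -> list:
    """Turn a raw compiler log into a list of annotation dicts."""
    annotations = []
    for raw in log.splitlines():
        m = DIAG.match(raw)
        if m:
            ann = m.groupdict()
            ann["line"] = int(ann["line"])
            ann["col"] = int(ann["col"])
            annotations.append(ann)
    return annotations

log = """\
main.cpp:12:5: warning: unused variable 'x' [-Wunused-variable]
main.cpp:20:10: error: expected ';' after expression
note: some unrelated line
"""
anns = parse_log(log)
print(anns)
```

Lines that don't match the pattern (linker chatter, notes without locations) are simply skipped.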
Annotation mapping

Since errors reported by a test framework normally point to the location of the test and not to the code under test, report.ci provides facilities to move or duplicate annotations.

The target location can be given as a file and line, or as a semantic description, e.g. a function name.

See the docs
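The remapping amounts to a lookup from a semantic target (like a function name) to its real source location. A toy version of the idea — the data shapes are my own illustration, not report.ci's configuration format:

```python
# Toy annotation remapper: move an annotation from the test's location
# to the location of the function under test. Names and dict shapes are
# illustrative only.

# Where each tested function actually lives, e.g. collected from the sources.
SYMBOL_TABLE = {
    "parse_header": ("src/parser.c", 42),
}

def remap(annotation: dict, tested_symbol: str) -> dict:
    """Return a copy of the annotation pointing at the tested symbol."""
    file, line = SYMBOL_TABLE[tested_symbol]
    moved = dict(annotation)
    moved["file"] = file
    moved["line"] = line
    return moved

ann = {"file": "test/test_parser.c", "line": 101, "message": "header test failed"}
moved = remap(ann, "parse_header")
print(moved)
```

Duplicating instead of moving would just mean keeping both the original and the remapped copy in the annotation list.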

If you run your own CI system, or your service does not have a GitHub integration, report.ci has you covered.

You can mark a job on GitHub as queued, started or canceled, and upon completion post the test results.

See the docs

Badges

report.ci has badges - lots of them.

With a plain CI badge you cannot say more than "build has passed". A report.ci badge can also tell you how many tests passed.

See the badge code generator
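A test-count badge is ultimately just a small generated SVG with a label half and a value half. A bare-bones sketch of the idea (real badge services compute text widths and render much nicer layouts):

```python
def badge_svg(label: str, value: str, color: str = "#4c1") -> str:
    """Render a crude label/value badge as an SVG string."""
    return (
        '<svg xmlns="http://www.w3.org/2000/svg" width="160" height="20">'
        '<rect width="80" height="20" fill="#555"/>'
        f'<rect x="80" width="80" height="20" fill="{color}"/>'
        f'<text x="40" y="14" fill="#fff" text-anchor="middle">{label}</text>'
        f'<text x="120" y="14" fill="#fff" text-anchor="middle">{value}</text>'
        "</svg>"
    )

# A badge that says more than "passed": include the counts.
svg = badge_svg("tests", "128/128 passed")
print(svg)
```

Serving this string with an `image/svg+xml` content type is all it takes for the badge to render in a README.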

This is how annotations look on GitHub