Report.ci publishes your test results to GitHub and generates code annotations, so if someone breaks your tests in a pull request you know where it came from. It supports a wide range of testing frameworks and is easy to use with continuous integration services. Including it can be as simple as one command:
python <(curl -s https://report.ci/upload.py)
Report.ci also provides badges that tell you how many tests ran, as well as scheduling of builds for custom CI systems.
Report.ci takes your test results and generates a detailed test report on GitHub, so you know what went wrong. Besides statistics, this includes annotations that show up directly in pull request diffs.
Report.ci can handle input from 28 test frameworks across 9 programming languages. This includes the most popular frameworks like JUnit, but also lesser-known ones like doctest for C++.
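Most of these frameworks can emit a machine-readable report, with JUnit-style XML serving as the de-facto interchange format. As an illustrative sketch (not report.ci's own code), this is the minimal shape of such a file, built with Python's standard library:

```python
import xml.etree.ElementTree as ET

# Minimal JUnit-style report: one suite, one passing and one failing test.
# Real frameworks emit this for you (e.g. pytest via --junitxml).
suite = ET.Element("testsuite", name="example", tests="2", failures="1")
ET.SubElement(suite, "testcase", classname="math", name="test_add")
failing = ET.SubElement(suite, "testcase", classname="math", name="test_div")
failure = ET.SubElement(failing, "failure", message="division by zero")
failure.text = "ZeroDivisionError raised in test_div"

xml_report = ET.tostring(suite, encoding="unicode")
print(xml_report)
```

A tool like report.ci can parse the suite, test case, and failure elements out of a file like this regardless of which framework produced it.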
You can also upload the output of 9 different tools (compilers or interpreters). This generates an annotated log and annotations on GitHub.
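What makes compiler output annotatable is its regular `file:line:column: severity: message` shape. A hedged sketch of that kind of parsing (the pattern and field names are assumptions for illustration, not report.ci's implementation):

```python
import re

# GCC/Clang-style diagnostic line, e.g.:
#   main.cpp:12:5: error: use of undeclared identifier 'foo'
DIAGNOSTIC = re.compile(
    r"(?P<file>[^:\s]+):(?P<line>\d+):(?P<col>\d+): "
    r"(?P<severity>error|warning|note): (?P<message>.*)"
)

def parse_diagnostics(log):
    """Turn a raw compiler log into annotation-like dicts."""
    return [m.groupdict() for m in map(DIAGNOSTIC.match, log.splitlines()) if m]

log = "main.cpp:12:5: error: use of undeclared identifier 'foo'"
print(parse_diagnostics(log))
```

Each extracted dict carries exactly the fields a GitHub annotation needs: a path, a line, a severity, and a message.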
Since the errors reported by a test framework normally point to the location of the test rather than the code under test, report.ci provides facilities to move or duplicate annotations. The target location can be given either as a file and line or as a semantic description, e.g. a function name.
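One way the semantic variant could work — resolving a function name to its current source location — is a small AST lookup. A minimal Python sketch under that assumption (not report.ci's actual mechanism):

```python
import ast

def locate_function(source, name):
    """Return (first_line, last_line) of a named function definition,
    so an annotation can be moved from the test onto the code under test."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)) and node.name == name:
            return node.lineno, node.end_lineno
    return None

source = "x = 1\n\ndef divide(a, b):\n    return a / b\n"
print(locate_function(source, "divide"))
```

Resolving by name rather than by line keeps the annotation attached to the right code even after the file is edited and line numbers shift.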
If you run your own CI system, or your service does not have a GitHub integration, report.ci has you covered. You can post a job to GitHub as queued, started, or canceled, and upon completion publish the test results through report.ci.
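These states mirror GitHub's Checks API, where a check run moves through "queued", "in_progress", and "completed". As a hedged sketch of what such a state update looks like (the field names follow GitHub's public check-run schema, not report.ci internals):

```python
import json

def check_run_payload(name, status, conclusion=None):
    """Build a GitHub-style check-run payload for one CI job."""
    payload = {"name": name, "head_sha": "<commit sha>", "status": status}
    if status == "completed":
        # A completed run must carry a conclusion, e.g. "success" or "failure".
        payload["conclusion"] = conclusion or "success"
    return payload

print(json.dumps(check_run_payload("unit tests", "queued")))
print(json.dumps(check_run_payload("unit tests", "completed", "failure")))
```

A custom CI system only needs to emit these transitions at the right moments; report.ci relays them so the pull request shows live job status.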
Badges. report.ci has badges - lots of them.
With report.ci your readme.md no longer has to settle for "build has passed". Instead it can show how many tests ran and passed.