Here's a project with ~500 unit tests: https://github.com/jpdillingham/Soulseek.NET
Coverage was at 100% for a while but there's some refactoring going on at the moment.
Of particular interest are a few Adapter classes in the Tcp namespace (TcpClientAdapter, for instance) that demonstrate how to apply the Adapter pattern to classes that lack interfaces, so they can be injected and therefore mocked in tests.
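The project itself is C#, but the idea translates to any language: define a small interface that you own, wrap the interface-less third-party class in a thin adapter, and make the code under test depend on the interface so a mock can be injected instead. A rough Python sketch of the same pattern (ITcpClient and Uploader are illustrative names, not taken from the repo):

    import socket
    from abc import ABC, abstractmethod

    # The interface we own; the real socket class doesn't implement one.
    class ITcpClient(ABC):
        @abstractmethod
        def connect(self, host: str, port: int) -> None: ...

        @abstractmethod
        def send(self, data: bytes) -> None: ...

    # Thin adapter around the concrete class from the standard library.
    class TcpClientAdapter(ITcpClient):
        def __init__(self) -> None:
            self._sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

        def connect(self, host: str, port: int) -> None:
            self._sock.connect((host, port))

        def send(self, data: bytes) -> None:
            self._sock.sendall(data)

    # Code under test depends only on the interface, so a test can pass in
    # unittest.mock.Mock(spec=ITcpClient) instead of a real connection.
    class Uploader:
        def __init__(self, client: ITcpClient) -> None:
            self._client = client

        def upload(self, host: str, port: int, payload: bytes) -> None:
            self._client.connect(host, port)
            self._client.send(payload)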
Hi!
One of the best-known tools is pylint. It provides a lot more than syntax checking: it also catches unused imports/variables/arguments, code duplication, and much more.
Add pylint -f html OxFA > $CIRCLE_ARTIFACTS/pylint.html
to your test->post section and you will have a full report from pylint in the "Artifacts" section.
If you don't want the build to fail because of the pylint check, just append a command after pylint that returns exit code 0, like: pylint -f html OxFA > $CIRCLE_ARTIFACTS/pylint.html; echo "pylint done"
You can use the https://codecov.io/ service and it's very easy; you just need to add a few lines to your config file:
    test:
      post:
        - pip install codecov && codecov --token <token from codecov.io>
        - mv coverage/* $CIRCLE_ARTIFACTS/
        - rm -rf coverage/
I agree. There are a lot of packages that are not documented well enough. In addition, there are likely many more that aren't tested well enough. I genuinely believe a package that is not both well tested (>99% coverage) and well documented (>99% of public APIs) is probably not ready for production environments.
On the other hand, there are excellent packages that go above and beyond. For example, the bloc library and ecosystem is extremely well documented and boasts 100% code coverage. Packages that don't have a test/ directory immediately indicate they are not production ready, even though they could otherwise look like excellent packages.
To ensure good code quality I just work on keeping complexity and coupling low. Easily swapped components are better than tightly coupled ones, and smaller blocks are easier to test and fix.
For CI/CD, I use a variety of things: GitHub Actions that run lint/tests with coverage and report to codecov.io, plus a deploy script that updates my sites and samples.
My testing strategy is basically to test the most critical code and try to hit 100% coverage, or close to it. I don't test everything, but I test the most important parts thoroughly. It's really just a question of time investment.
For debugging/logging I use either the tools built into Flutter, or built-in features on my platform that give me insights.
Personally I handle Analytics in a variety of ways. I have Tracking widgets that can fire when they come into scope or go out of scope. I also have facades that I use to imperatively call tracking when I need to.
As for managing config/environments, I have a minimal runner app and features that live in modules underneath it. The config lives in the runner, so if I need multiple environments I just set up multiple runners that take the environment config or switch up the services provided to the DI/service locator frameworks I use.
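The poster's context is Flutter/Dart, but the shape of the idea is easy to sketch in Python terms: each runner is a tiny entry point that hands an environment-specific config to the same feature code. EnvConfig, build_app, and the runner functions below are made-up names for illustration, not from their project:

    from dataclasses import dataclass

    @dataclass
    class EnvConfig:
        api_base_url: str
        analytics_enabled: bool

    def build_app(config: EnvConfig):
        # In the real setup this is where services would be registered with
        # the DI/service locator; the sketch just carries the config along.
        return {"config": config}

    # One runner per environment; the features underneath never change.
    def run_dev():
        return build_app(EnvConfig("https://dev.example.com", analytics_enabled=False))

    def run_prod():
        return build_app(EnvConfig("https://api.example.com", analytics_enabled=True))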
Thought I would share this, as it should help validate the integrity of their shell script before running it.
If your CI script is doing this:
bash <(curl -s https://codecov.io/bash)
Replace it with something like the following:
    #!/bin/bash
    FILENAME=codecov
    curl -s https://codecov.io/bash > $FILENAME
    CODECOV_VERSION=$(grep 'VERSION=\".*\"' codecov | cut -d'"' -f2)
    VALIDATION_FAILURE=0

    for i in 1 256 512
    do
      IS_DIFF=$(diff <(shasum -a $i $FILENAME) <(curl -s https://raw.githubusercontent.com/codecov/codecov-bash/$CODECOV_VERSION/SHA${i}SUM))
      if [ -z "${IS_DIFF}" ]; then
        echo "Sha:" $i "passes validation."
      else
        VALIDATION_FAILURE=1
      fi
    done

    if [ "${VALIDATION_FAILURE}" == 1 ]; then
      echo "Invalid Checksum Detected From Codecov. Quitting."
      exit 1
    else
      echo "Starting Codecov."
      chmod +x codecov
      ./codecov
    fi

    rm -rf $FILENAME
Hey, thanks for the input. Good to know, we are on the right track with gcov. I'll give llvm-cov a look soon. Are they much different?
codecov.io looks nice, but I think it's not an option for us. We'd need to self-host for regulatory reasons and that would probably be too pricey...
I think gcov/llvm-cov are the way to go on the data source side, though I had a much better experience with codecov.io in terms of GitLab/GitHub integration, since it provides a decent web interface in addition to a coverage diff comment on MRs/PRs.
This seems like more work than it needs to be. You can just run your tests with CTest and upload with codecov's bash script.
This is my entire setup:
    - name: Test
      run: ctest -j6 -C Debug -T test --output-on-failure

    - name: Upload Coverage
      run: |
        curl -s https://codecov.io/bash > upload.sh
        chmod +x upload.sh
        ./upload.sh -a "-r"
Hi folks, I am working on a self-hosted web application to manage coverage reports. It's an open source project written in Go, which is an alternative to services like Code Climate or Codecov.
The reason this project was created is that my company uses Gitea as its Git service, but we could not find a self-hosted service to collect test coverage reports.
The project is still at an early stage of development; it supports Go, Perl, and Python, which are the languages my company uses.
If you are interested in this project, feel free to either raise an issue on GitHub or leave a comment here.
Thanks for looking!
Really small stuff. For example, I do else if (node.type === "TEXT") and it complains I should use else because it can't have another type, but I think this is easier to understand, and that might change in the future as more types are added/supported.
In the Node conversion, I have an else return null because it needs to be exhaustive, but I still have to add a test for this. Actually, testing was hard, because the official Figma API is untestable. Someone made a library to mock the API, but that library is incomplete. So I need to re-declare the official class I want, only in tests, which is weird, but works.
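That project targets the Figma (TypeScript) API, but the "re-declare the class only in tests" trick is general: when the official type can't be constructed outside its host environment, a minimal stand-in that declares just the members the code reads can take its place. A rough Python-flavoured sketch of the same idea (convert_node and FakeTextNode are invented names, not the poster's code):

    from dataclasses import dataclass

    # Code under test: only ever reads .type and .characters from a node.
    def convert_node(node):
        if node.type == "TEXT":
            return {"kind": "text", "value": node.characters}
        return None  # the exhaustive "else return null" branch

    # Test-only stand-in for the real API class, declaring just the members
    # the converter actually touches.
    @dataclass
    class FakeTextNode:
        type: str
        characters: str

    def test_converts_text_nodes():
        assert convert_node(FakeTextNode("TEXT", "hello")) == {"kind": "text", "value": "hello"}

    def test_other_node_types_return_none():
        assert convert_node(FakeTextNode("FRAME", "")) is None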
Currently it is at 99.63% and 8k LOC (including tests which are about half). I'm not exactly sure it will ever reach 100%, since there are still many improvements to be made and tests are not exactly easy, but I'm very very happy with 99%. You can check the coverage here:
Nobody is going to argue against the merits of testing, but you shouldn't present them without mentioning the drawbacks. Ossification of a codebase against specification changes is a real issue. The time cost of excessive testing is not worth the reward in the same way that excessive optimisation is not worth the reward. "100% test coverage" can give false confidence in the codebase and psychologically prime you against seeing what could/should be obvious major errors that you would otherwise detect from a suspicious mindset.
Even the Python core is not tested with 100% coverage for these reasons.
Moving fast and breaking things creates more value and earns more money than OCD testing.
Thank you for your comment. I added a code coverage module and added a codecov badge to the GitHub readme :). Link: https://codecov.io/gh/korpozim/texthelper1
Here's an example: https://codecov.io/gh/ravendb/ravendb-go-client/src/master/create_subscription_command.go
There are 3 lines of actual code that are not covered, out of 53 total lines, but codecov shows coverage of 66.7%.
Many thanks for your advice! You are correct that the WhiteboxTools executable is absolutely required; the R package would be useless if the WhiteboxTools executable is not downloaded. However, if the downloading process becomes interactive (i.e., users can download and unzip WhiteboxTools to any file path), I feel the package would not pass any automated tests on CRAN, since all the testing functions require the executable. The code coverage of the package would drop to 0%, which probably wouldn't be accepted on CRAN either.
My 99% unit tested static singleton controllers beg to differ.
HTTP Requests are singletons and rely on server state. Yea, you can instantiate them but then you have to pass an object around when you want to actually use it.
I've been looking at PSR-15.
The concept of passing a ServerRequestInterface object around to a handler just so it can access a GET variable strikes me as an anti-pattern. Sure, it becomes easier to test, but at what performance cost?
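For what it's worth, the testability half of that trade-off is easy to illustrate. Sketched here in Python rather than PHP, and only mimicking the PSR-15 shape (FakeRequest and search_handler are invented names): a handler that receives the request as a parameter can be exercised with a hand-built request object, with no server or global state involved.

    # A hand-rolled stand-in for a request object exposing query parameters.
    class FakeRequest:
        def __init__(self, query):
            self._query = query

        def get_query_params(self):
            return self._query

    # The handler depends only on the request passed to it, not on globals.
    def search_handler(request):
        term = request.get_query_params().get("q", "")
        return f"searching for {term!r}"

    def test_search_handler():
        assert search_handler(FakeRequest({"q": "coverage"})) == "searching for 'coverage'"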
A bit of self-promotion, but my very recent project https://github.com/aio-libs/aiozipkin has the following things, which may be interesting from a QA standpoint:
1) pyflakes and pycodestyle checker (with flake8 tool)
2) flake8-bugbear to find even more likely bugs and design problems
3) flake8-mypy and mypy static type checker
4) test coverage 96% https://codecov.io/gh/aio-libs/aiozipkin
5) flake8-quotes to force consistent quotes
6) pytest-sugar for nicer test reports https://travis-ci.org/aio-libs/aiozipkin/jobs/317774732
7) docker fixtures that spin up servers before the tests start and tear them down after (see the sketch after this list)
8) also python setup.py check --restructuredtext to make sure that project description has proper formatting
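A rough sketch of what such a docker fixture can look like with pytest; the image, port, and URL below are generic Zipkin defaults used for illustration, not necessarily what aiozipkin's test suite actually does:

    import subprocess
    import pytest

    # Session-scoped fixture: start a Zipkin container once before the test
    # run and remove it when the run finishes.
    @pytest.fixture(scope="session")
    def zipkin_server():
        container_id = subprocess.check_output(
            ["docker", "run", "-d", "-p", "9411:9411", "openzipkin/zipkin"],
            text=True,
        ).strip()
        try:
            yield "http://127.0.0.1:9411"
        finally:
            subprocess.run(["docker", "rm", "-f", container_id], check=True)

    def test_server_is_reachable(zipkin_server):
        # Real tests would send spans here; the fixture guarantees the
        # server exists for the duration of the session.
        assert zipkin_server.startswith("http://")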
> In practice this has happened twice in recent months with point releases of Lodash alone.
Lodash is generally low risk. Most cases of things breaking come down to folks doing things outside the documented usage or supported environments.
> Protip: it's really hard to unit-test for browser support (arguably the biggest source of JS bugs going) and really easy to push out code into production that breaks when run in certain browsers
Things like Sauce Labs make it easy:
https://saucelabs.com/u/lodash
Code coverage is a rad thing too:
https://coveralls.io/github/lodash
> Yep. And even so it frequently pushes out releases that break in browsers, and has to do several hurried point-releases over the course of several days to fix the broken code in production.
I donno, browser related issues are pretty rare for Lodash.
If you have a project that is hitting issues you should pass it along.
> Intellij 15
Hi,
Thanks for your feedback. I tested it in eclipse and also with lots of requests in unit tests: https://codecov.io/github/eBay/parallec
Can you check if you use this? http://search.maven.org/#artifactdetails|io.parallec|parallec-core|0.9.0|
What is the error you get? Please submit an issue on GitHub and we can discuss it further. Thanks!
Announcement text quote:
With the help of @ColdenCullen, Codecov now supports D language. You can easily upload your coverage reports and utilize our many features to enhance your workflow.
Writing tests for your code is important, no question. But without proper coverage reports, the results of your tests are simply pass or fail. Codecov makes it easy to upload coverage metrics to get more insight into how your tests are performing.
A must have is our Browser Extension that overlays coverage reports directly in Github's interface for a seamless experience and further insight into your code.
Unlimited public repos, free forever. Unlimited private repos only $5 a month.
Learn more at https://codecov.io View examples at https://github.com/codecov/example-d Questions and comments: Twitter: @codecov
Thank you and have a great day!
Steve and the Codecov Family
Codecov - https://codecov.io
Testing your products is essential for building successful features; one struggle in writing tests is not knowing which sections of the code were actually tested and which were not tested at all. Codecov is a solution that will become part of your development workflow by providing meaningful reports and statistics on your product and features. We have a great roadmap of exciting features, but they all start with Code Coverage. Showing you what parts of your product are untested can give you great insight into how the product will perform or where bugs could be originating.
We are looking for feedback and investment. Thank you!