Friday, May 19, 2017

Simple and practical continuous delivery for your library

The article below was already shared with you on LinkedIn in early May. We are now in mid-May, so let's refresh it and fill in any gaps! This article is about Mockito release tools, not so much about mocking Java code. Let's get started!

Imagine...


Imagine a world where you can pull in a new version of some Open Source library and not worry whether it breaks compatibility. Imagine that you can submit a pull request to some project, have it reviewed promptly, and have the new version with your fix available to you minutes after your PR is merged. Imagine that for any dependency you consider upgrading, you can view its neatly and consistently maintained release notes. Imagine that you can set up practical Continuous Delivery automation in your project in minutes, by using a well-behaved and documented Gradle plugin. Imagine that you can focus on code and features while release management, versioning, publishing, and release notes generation are taken care of for you automagically.

This is the goal of a new Open Source project that a couple of friends and I started cooking up together. We used to call it "mockito release tools" because it originates from the release automation goodness we developed for the http://mockito.org project. Mockito is the most popular mocking framework for Java and it is used by 2M users.

We found a name that captures our mission very well: "Shipkit" - a toolkit for shipping it! Check it out at http://shipkit.org

True North Star


The goal is to provide easy-to-setup Continuous Delivery tooling. We would like other teams to take advantage of rapid development, frictionless releases and semantic versioning just like we do in Mockito. We plan to make our release tools generic and neatly documented. It will be a set of libraries and Gradle plugins that greatly simplify enabling Continuous Delivery for software libraries.

Beyond the North Star


The project is gaining steam and there are already 4 contributors. I wanted to shout out what we are doing because... we are getting more ambitious and the goals of the project are getting broader in scope. After all, if we don't want to conquer the world, then why bother? :)

We need help


If the project goals speak to your engineering heart, join our efforts! You can help with:
  • Implementing features - see issues marked with the "please contribute" label.
  • Spreading the word about what we're doing and letting us know about other projects with similar goals. You can use our issue tracker to reach out.

Thursday, February 2, 2017

Mockito Continuous Delivery Pipeline 2.0

What do you think about the Mockito release model? This article shows diagrams of proposed changes to the Mockito continuous delivery pipeline. Those are high level diagrams. They lack comprehensive commentary yet, but I will add more context and explanations soon.

Current Mockito Continuous Delivery Pipeline (described in detail on our wiki page):

Mockito Continuous Delivery Pipeline 1.0

Given community feedback (documented in issue 618), we proposed a plan to change Mockito's release cadence (documented in issue 911).

Proposed Mockito Continuous Delivery Pipeline:

Mockito Continuous Delivery Pipeline 2.0
Do you have feedback to share? Join the discussion in issue 911 or drop me a comment.

Tuesday, January 24, 2017

Clean tests produce clean code - strict stubbing in Mockito

Clean tests produce clean code. It’s close to impossible to write clean tests for dirty code.

Do you agree with the above? If you do (and I sincerely hope you do), read on about a brand new concept in Mockito, the “strict stubbing” feature. In a nutshell it provides:
  • better productivity by detecting incorrect stubbing early (covered in great detail in this article).
  • cleaner tests by eliminating unnecessary stubbings (somewhat covered, assuming it is obvious)
  • DRY (Don’t Repeat Yourself) test code by verifying stubbing automatically (sadly, not covered in this article; if you want to know more about it, leave me a comment :).
Stubbing in Mockito is considered delightfully simple and intuitive:

//imports needed for the example below
import static org.mockito.BDDMockito.given;
import static org.mockito.Mockito.mock;
import java.util.List;

//given
List<String> mock = mock(List.class);
given(mock.get(3)).willReturn("ok");

//expect
assert mock.get(1) == null; //unstubbed invocation returns the default value (null)
assert mock.get(3) == "ok"; //stubbed invocation returns the configured value

"Things should be as simple as possible, but not simpler" said Albert Einstein. Mockito stubbing is very close to Enstein’s simplicity boundary. There are times when returning null or empty values when stubbing argument does not match causes a lot of pain to our users. The user can stare at the code not understanding why mocks don’t behave as they are configured to. It happens when mocks return nulls instead of stubbed values configured in the test. The user resorts to debugging and stepping program execution to figure out what’s going on.

I will stop here because I feel uncomfortable complaining about the API I designed myself… I hope the "stubbing argument mismatch" scenario does not impact your productivity too often and that you still enjoy cutting some great tests with Mockito.

When you hit the stubbing argument mismatch scenario with Mockito 1.x, here are your resolution options:
  • reviewing code: double checking the arguments passed to mocked method in the test and in the code
  • debugging: stepping program execution, inspecting the value of arguments. Using debug statements
  • reducing complexity: temporarily removing logic to oversimplify the code, temporarily replacing stubbing arguments with permissive argument matchers like "any()"
  • leveraging the Mockito API: Mockito.RETURNS_SMART_NULLS (see the sketch after this list)
  • migrating to the latest Mockito! (finally I get my punchline)
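
For the RETURNS_SMART_NULLS option, here is a minimal sketch of how smart nulls ease debugging (the Translator and Translation types are made up for illustration):

import static org.mockito.Mockito.RETURNS_SMART_NULLS;
import static org.mockito.Mockito.mock;

public class SmartNullsDemo {
    //hypothetical collaborators, made up for illustration
    interface Translation { String text(); }
    interface Translator { Translation translate(int number); }

    public static void main(String[] args) {
        Translator translator = mock(Translator.class, RETURNS_SMART_NULLS);

        //no stubbing was configured, so instead of plain null the mock
        //returns a "smart null" for the mockable return type
        Translation translation = translator.translate(10);

        //using the smart null throws SmartNullPointerException
        //pointing at the unstubbed invocation, instead of a bare NPE
        translation.text();
    }
}
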
Mockito 2.1 comes with a JUnit Rule that prints warnings to the console when a stubbing argument mismatch is detected. When you stub a method with some argument(s) but then the same method is invoked with different argument(s), a warning is printed. Mockito's painstakingly comprehensive Javadoc documents this behavior in the MockitoHint class. (BTW, MockitoHint is a marker interface, existing only for documentation and Javadoc linking. Isn't it a neat idea?)
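
To illustrate, here is a minimal sketch of the mismatch scenario with the Mockito 2.x JUnit Rule (the exact console output depends on the Mockito version):

import static org.mockito.BDDMockito.given;
import static org.mockito.Mockito.mock;

import java.util.List;
import org.junit.Rule;
import org.junit.Test;
import org.mockito.junit.MockitoJUnit;
import org.mockito.junit.MockitoRule;

public class MismatchWarningTest {

    //the default (non-strict) rule prints hints instead of failing
    @Rule public MockitoRule mockito = MockitoJUnit.rule();

    @Test public void stubbingArgumentMismatch() {
        List<String> list = mock(List.class);

        //stubbed with argument 3...
        given(list.get(3)).willReturn("ok");

        //...but invoked with argument 5: the stubbing never matches,
        //the mock returns null, and a warning is printed to the console
        assert list.get(5) == null;
    }
}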

The reason Mockito produces a warning instead of an immediate test failure is because... it's really a "warning", not an "error". Sometimes it is actually desired to call the stubbed method many times with different arguments, where some of those args match and some don't. Arguably, those scenarios are quite rare. Originally I even had an example here in this blog post describing this use case in more detail. Eventually I got rid of the example; it was not compelling enough. Take my word for it: there are legit cases where a stubbing argument mismatch is _not_ an error. I failed to produce an example for this use case; perhaps you can share something from your tests?

Mockito 2.1 also contains improved behavior of the JUnit Runner, which detects unused stubbings and reports them as failures. This helps keep the tests clean. This feature is not directly connected to the stubbing argument mismatch scenario. However, it is a part of the same design thought - making Mockito stubbing stricter for cleaner tests and improved productivity.
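
A minimal sketch of a stubbing that the runner reports as unused (assuming the Mockito 2.x runner behavior described above):

import static org.mockito.BDDMockito.given;
import static org.mockito.Mockito.mock;

import java.util.List;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class UnusedStubbingTest {

    @Test public void hasUnusedStubbing() {
        List<String> list = mock(List.class);

        //stubbed but never invoked by the test or the code under test:
        //the runner fails the class with UnnecessaryStubbingException
        given(list.get(0)).willReturn("never used");
    }
}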

Just like the JUnit Rule, the Runner also prints warnings on stubbing argument mismatch. Have you noticed those warnings? Do you find them useful? Here are the traits of the warning system:
  • Warnings are useful only when developers actually pay attention to the console output. 
  • Program output can interleave with the warnings. Given enough noise in the output the warnings will simply be lost.
  • Warnings can interleave with an actual test failure, which can lead to confusing test results. The user sees both the warning and the error and tries to figure out which one is the problem to fix.
To address the caveats of warnings, Mockito 2.3 comes with a "strict stubbing" option for JUnit Rules. This feature makes stubbing argument mismatches fail fast. The user immediately gets feedback that there is something fishy about the stubbing in the test _or_ the use of the stubbed method in the production code. This new fail-fast behavior might be undesired for certain use cases, therefore strict stubbing can be turned off per test. Strict stubbing is tentatively planned as the default behavior for Mockito version 3 because we believe it improves productivity. We would really appreciate it if you tried out strict stubbing and let us know how it works for you.

Strict stubbing with JUnit Rules:

@Rule public MockitoRule mockito = MockitoJUnit.rule().strictness(Strictness.STRICT_STUBS);

Mockito 2.5.2 adds strict stubbing support for the JUnit Runner:

@RunWith(MockitoJUnitRunner.StrictStubs.class)
public class SomeTest {
  //...
}

Currently strict stubbing in Mockito is only available with the JUnit Rule or JUnit Runner. This is not great news for TestNG fans or users that for some unknown reason decided not to use Mockito's JUnit support (Why not? It's really good for you. Let us know why!). We are currently working on adding strict stubbing support without the need to use the JUnit Runner or JUnit Rule. It will unlock the feature for everybody. This feature will be merged within a few days, so it's your last chance to provide feedback on the design note or the pull request.
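
If you are curious, here is a rough sketch of what JUnit-free strict stubbing could look like with TestNG, based on the session API from the design note (names and details may change before the feature is merged):

import static org.mockito.Mockito.mockitoSession;

import java.util.List;
import org.mockito.Mock;
import org.mockito.MockitoSession;
import org.mockito.quality.Strictness;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;

public class StrictStubsTestNgTest {

    @Mock List<String> list;
    MockitoSession session;

    @BeforeMethod public void startSession() {
        //starts a session that initializes the @Mock fields and
        //enables fail-fast strict stubbing for this test
        session = mockitoSession()
            .initMocks(this)
            .strictness(Strictness.STRICT_STUBS)
            .startMocking();
    }

    @AfterMethod public void finishSession() {
        //validates the session, e.g. reports unused stubbings
        session.finishMocking();
    }
}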

If you got this far, I can only thank you for reading. Let us know what you think about strictness in Mockito.

May your tests be blessed with clarity!

Sunday, February 1, 2015

Enabling continuous deployment is not enough

One of the goals of my new team at LinkedIn is building a brand new platform for front end engineers so that they can deploy new features to production 3 times a day. However, we don't want to just "enable" other teams to deploy to prod daily. We want to automatically deploy. Always.

This is what I find critical in the continuous delivery mind shift. Enabling teams (through automation) to go to production frequently is not enough. A team might be enabled to deploy to prod easily and automatically. However, only when the deployment is regular and automatic (for example: triggered on every push to master - like in Mockito) does the change of the team's habits, thinking and behavior truly begin. A team that is enabled to continuously deliver differs so much from a team that continuously delivers. At least, those are my subjective observations :)

I chatted about it with Peter Niederwieser, the legendary creator of the Spock test framework. Spock is not continuously deployed at the time of writing this post (this will change, right Peter!!!?? :). Publication of new Spock versions is more interesting than Mockito's because the former needs to deal with Groovy variants (compile, build and run against different Groovy runtimes, publish artifact variants, etc.). Given the complexity of the build process, there is a temptation to automate everything to the point where all artifacts, documentation and release notes can be published via one click. I'd say it's not enough. Enabling continuous delivery is simply not as gratifying as doing it all the way through.

It's hard to explain what changes when the team does true continuous delivery. The attitude to a code commit is different. Commit messages get better. Tests are cleaner and deeper, more thoughtful. There is more vigor in battling flaky tests (fact of life: continuous delivery == lots of tests == flaky tests). There is more energy to improve the automation even further. To make the automated release notes look better. So on, so on.

I'm on a mission to do continuous and automatic delivery and not merely enable it. I hope I managed to stir you up a bit, too :) Right now I'm lonely at Munich airport waiting for a connecting flight to Krakow. In a week, I'm moving my family over to Santa Clara CA. I got pretty agitated writing this post, hopefully the content makes sense. Keep in mind that it's my subjective experience.

Tuesday, January 6, 2015

Continuously delivering beta versions

I used to think that beta versions are lame. I associated “beta” with poor quality and with pushing validation to the users instead of ensuring high quality internally, through test driven development, clean design and excellent engineering practices. Now we are in the process of making Mockito 2.0 and we are using betas...

Beta versions make it possible to continuously deliver incompatible changes, incrementally. The quality is high: all tests pass, new coverage is meticulously added. Since there are several potentially incompatible changes, it would be hard to implement all of them in one go. We need more time and we still want to deliver continuously. Betas to the rescue!

It makes me think about why releasing a new major version typically involves relatively significant effort. Why isn’t a major version release as simple as it should be? In theory, one can develop a software library with a high focus on compatibility: deprecate features and APIs while providing replacements, avoid incompatible changes at all cost, etc. In this mode, releasing a major version could be as simple as removing deprecated code, turning on some new behavior, changing the default settings, and that’s it. A few commits and a single push. In practice, it is never that simple. For example, it might be costly to keep old and new behavior in parallel so that it can be toggled in the next major version. In this case, the authors may choose to remove the previous behavior and implement a new one as part of the major version effort. In some cases, that might actually be best for the end users because they get the new features faster.

It makes me wonder about developing a software library that is always backwards compatible. I’m afraid it wouldn’t work. It feels like breaking changes are needed from time to time. Otherwise, development can be slowed down or even paralyzed. Old features and APIs may stand in the way of innovation. It may become hard to tackle new, important and challenging use cases. The industry does change, competition is restless, and going after new ideas and use cases is unavoidable.

Being always compatible is often impractical. Major version releases typically involve extra effort. The latter bothers me. I’m going to work hard to avoid that in the future. Continuous delivery makes releases a hassle-free part of the every-day routine. Semantic versioning and a focus on compatibility make major version releases somewhat challenging and totally interesting. Hassle-free major version releases, on a regular basis, indicate a very mature continuous delivery model. I want to be there.

Saturday, December 13, 2014

New day, new Mockito release

Mockito 1.10.15, which I have just released, contains an interesting plugin for Android testing. “I have released” is probably not the right phrase - I coded for some time, pushed the changes to master, went to sleep, and this morning I took a look and saw a new version published. How can I not love continuous deployment?

The full build and the release are relatively fast. Technically, I could have waited a couple of minutes before going to sleep. However, I was very confident the release process would proceed cleanly. I didn’t feel I needed to inspect the automatically generated release notes. I didn’t feel I needed to check whether the binary sits happily in Bintray’s jCenter. I didn’t feel I needed to do anything other than push code to master. Quality code, of course: test-driven, designed and documented properly. I wish everybody took advantage of the modern approach of continuous delivery: being able to focus on quality code and avoid thinking about releases as they happen automatically, all the time, in the background.

I’m making progress on extracting the bits and pieces of the release automation out of Mockito so that the machinery can be reused in other projects. If you want to take a look at how it is done, you can fork Mockito on GitHub. Down the road I’m thinking about building a release toolkit with an opinionated framework sitting on top of it. The framework would most likely contain Gradle plugins. In the simplest case, it would be enough to apply the Gradle plugin and augment it with a few lines of configuration in the build.gradle file. Then the project fully embraces continuous delivery, engineers enter a new level of joy from developing code, and the consumers smile more often as they get features frequently, in small increments that contain fewer bugs. That’s my dream :)

Tuesday, November 25, 2014

Continuous delivery of evolving API

Here’s what I believe a high quality API means:

  • evolves over time, as growing adoption triggers the appearance of new use cases and new, creative, unanticipated usages of the API
  • is useful for the end users and solves real world use cases in an elegant way
  • is backwards compatible, safe to upgrade, and follows the best of semantic versioning

Now the challenge is to reconcile the above criteria with frequent releases, or even better: with continuously deployed releases.

A longer release cadence is somewhat tempting for API development: there’s plenty of time to get the model and the interfaces right. Getting the API right is the key to generating happy users and avoiding the need for breaking changes. However, a slow release cadence introduces many problems, including those that I dread the most:

  • a release contains many changes and is therefore riskier, with a higher chance of introducing bugs. As long as there are changes in the source code and there are people using the software, there are bugs
  • new features reach the users late, sometimes causing frustration and damaging the adoption rate

I don’t want to release rarely. I’ve been there with Mockito and it was dark and gloomy.

Say you start developing a brand new library or tool. In order to release features often and still maintain the freedom to change the API easily, you might be tempted to start versioning from 0.0.1. Semantic versioning lets you get away with breaking changes before the official 1.0. The users might not be that forgiving, so pre-1.0 software authors often care for backwards compatibility anyway. This typically indicates that the software should be 1.0 already.

The 0.x.y versioning scheme works great at discouraging potential users and can hamper adoption. There is something magical about 1.0. Regardless of the software quality and usefulness, if it’s not 1.0, it will be regarded by some groups as unstable and "not ready". I’m not against 0.x versions - I’m merely describing my observations of the software industry. The starting version of Mockito back in 2007/2008 was 0.9 and it reached 1.0 within a few weeks, once it was used in production. If you want to start versioning with 0.x, just keep in mind the semver docs (as of late 2014):
How do I know when to release 1.0.0? 
If your software is being used in production, it should probably already be 1.0.0. If you have a stable API on which users have come to depend, you should be 1.0.0. If you're worrying a lot about backwards compatibility, you should probably already be 1.0.0.
Let’s get back to the original criteria of a beautiful and useful API combined with continuous delivery. It’s hard. Even if the authors spend a good amount of time and resources on designing the API, they can miss some use cases. The users will experiment, hack, break and push the API to its limits. Sooner or later there will be legitimate scenarios that call for changes in the API. Considering the consistency and clarity of the API, it may not always be possible to evolve it in a backwards compatible way. It’s time to introduce the best friends of a great API and frequent releases: @Deprecated, @Incubating and @Beta.

Gradle (and Mockito) has @Incubating. Guava (and Spock) uses @Beta. At the moment, the documentation pretty neatly describes what those annotations are used for, but I want to emphasize the “why”. It’s all about the capability to release frequently, which ultimately leads to a better product and happy users. Early access to new features is fundamental for a high quality product. Frequent, small releases are critical to risk mitigation. API authors cannot afford to design the API for too long to ensure its stability and usefulness. API authors need real life feedback and concrete use cases. If the tool that you use has @Incubating or @Beta features, it is a very good sign! It means that the authors care a great deal about backwards compatibility and the API design (and they want to provide you with new features on a frequent, regular basis).
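
Here is a minimal sketch of how such annotations communicate an API's lifecycle (ReportGenerator and its nested types are made up for illustration; @Incubating is Mockito's annotation):

import org.mockito.Incubating;

public interface ReportGenerator {

    //placeholder types for the sketch
    class Report {}
    class ReportOptions {}

    //stable, public API: covered by backwards compatibility guarantees
    Report generate();

    //released early to gather real-life feedback and concrete use cases;
    //may still change before it graduates to the stable API
    @Incubating
    Report generate(ReportOptions options);

    //kept for compatibility, with a replacement available;
    //a candidate for removal in the next major version
    @Deprecated
    Report generateVerbose();
}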

Every incubating feature of Gradle or Mockito, or a beta feature of Guava or Spock, received exactly the same amount of love from the authors as the public features. It passed exactly the same design process, brainstorming and validation. It has tons of automated tests. It was test-driven. It is a high quality feature that calls for your feedback.

Evolution of the API based on incubation and deprecation of features, a high regard for backwards compatibility, and a continuous feedback loop between the authors and the users are what we need for successful continuous delivery of APIs. For a tool author like myself, it's great fun to work this way :)