Posts by ‘Alexandre Martins’

[Alexandre Martins] Deployment Smoke Tests: Is Anyone Being Slack?

Thursday, August 12th, 2010

I’ve always been a huge fan and advocate of using tests when developing applications. For me, working on software without a decent test suite is like walking on eggshells: each modification brings the risk of breaking something in the system. To mitigate this risk, I always make sure I have a minimum set of unit, integration and acceptance tests covering my application.

But does all that give us the confidence that the system will work perfectly when it’s deployed to any of the environments on its way through the release pipeline? I thought so, until I worked with this guy and read this book. Tom Czarniecki first introduced me to the concept of smoke tests; then, reading Jez Humble and David Farley’s Continuous Delivery, I grokked the real value of using them in conjunction with a build pipeline.

What are smoke tests?

As mentioned above, deployment smoke tests are quite handy because they give you confidence that your application is actually running after being deployed. They use automated scripts to launch the application, check that the main pages come up with the expected contents, and check that any services your application depends on (database, message bus, third-party systems, etc.) are up and running. Alternatively, you can reuse some acceptance or integration tests as smoke tests, provided they exercise critical parts of the system. The name comes from electronics: you power up each component in isolation and check whether it emits smoke.

[Image: smoke_tests.png]
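
As a rough illustration, here is a minimal smoke-check sketch in Ruby; the URL, host, port and expected page text are hypothetical placeholders rather than values from the original post.

# A minimal deployment smoke check (sketch).
require 'net/http'
require 'socket'
require 'uri'

# Check that a page responds with 200 and contains the expected content.
def page_up?(url, expected_text)
  response = Net::HTTP.get_response(URI.parse(url))
  response.code == '200' && response.body.include?(expected_text)
end

# Check that a dependent service (database, message bus, etc.) accepts connections.
def service_up?(host, port)
  TCPSocket.new(host, port).close
  true
rescue StandardError
  false
end

abort 'Home page is down or missing expected content' unless page_up?('http://my-app.example.com/', 'Welcome')
abort 'Database is not reachable' unless service_up?('db.example.com', 3306)
puts 'Smoke tests passed'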

Provide clear failure diagnostics

If something goes wrong, your smoke tests should give you some basic diagnostics explaining why your application is not working properly. In our current project at Globo.com, we are moving towards using Cucumber to write our smoke tests, so that we have a set of meaningful, executable scripts, like the one below.

Feature: database configuration
  System should connect to the database
  Scenario: should connect to the database
    When I connect to "my_app" database as root
    Then it should contain tables "users, products"
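
To be executable, those steps need matching step definitions. Below is a minimal sketch of what they might look like; the mysql2 client, the localhost root credentials and the file location are assumptions for illustration, not details from the original post.

# features/step_definitions/database_steps.rb (hypothetical)
require 'mysql2'

When /^I connect to "([^"]*)" database as root$/ do |db_name|
  # Root credentials on localhost are assumed here purely for illustration.
  @client = Mysql2::Client.new(:host => 'localhost', :username => 'root', :database => db_name)
end

Then /^it should contain tables "([^"]*)"$/ do |tables|
  expected = tables.split(',').map { |t| t.strip }
  actual = @client.query('SHOW TABLES').map { |row| row.values.first }
  expected.each { |table| actual.should include(table) }
end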

For those who like using Nagios to monitor infrastructure, Lindsay Holmwood wrote a tool called cucumber-nagios which lets you write Cucumber tests that output in the format expected by Nagios plugins, so that you can write BDD-style tests in Cucumber and monitor the results in Nagios.

Knowing quickly whether you are ready or not!

Clearly rapid feedback and safety are the two major benefits of introducing smoke tests as part of a release process.

Rapid feedback

In our project we implemented a deployment pipeline, so each new commit to the source repository is a potentially deployable version for any environment, even production. We have a commit stage where we run all the quick tests; as soon as they all pass, the acceptance-test stage is automatically triggered and the longer tests (integration and acceptance) are run. Once those have also passed, the application is automatically deployed into the dev environment. Getting a green at this stage means it has been successfully deployed and smoke tested. But there is still some exploratory testing to be performed before releasing this version into the staging environment, and in our team this is done by the product owner together with a developer. As soon as they are ready to sign the story off, all they have to do is click the manual button, which in turn deploys the application into the qa1 (UAT) environment. If it’s green they can proceed; otherwise they pull the cord, because something is malfunctioning, as you can see in the picture.

[Image: Screen shot 2010-08-11 at 4.13.01 PM.png]
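
Just to make the stage ordering concrete, here is a hypothetical Rakefile-style sketch of the flow described above; the task names and commands are placeholders, the real pipeline runs in a CI server, and promotion to qa1 (UAT) is a manual button rather than an automatic task.

# Rakefile (a sketch of the stage ordering only, not our actual pipeline)
task :commit_stage do
  puts 'running unit tests and other quick checks'
end

task :acceptance_test_stage => :commit_stage do
  puts 'running the longer integration and acceptance tests'
end

task :deploy_dev => :acceptance_test_stage do
  puts 'deploying the application to the dev environment'
end

task :smoke_test_dev => :deploy_dev do
  puts 'running deployment smoke tests against dev'
end

# Promotion to qa1 (UAT) is deliberately left out: it is triggered manually
# by the product owner and a developer once exploratory testing is done.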

Don’t let the application deceive you

It’s quite frustrating when all you need is for the system to work as expected, because you are about to showcase it to your customers, and on the first thing you click all you see is a big, ugly error screen instead of the page they were expecting. Later on you find out it was due to a database breakdown. That’s an embarrassing situation that could have been avoided simply by checking the smoke test diagnostics before the showcase.

[Alexandre Martins] TDD: Listen to the tests… they tell smells in your code!

Wednesday, July 28th, 2010

These days, reading the GOOS book (Growing Object-Oriented Software, Guided by Tests) by Steve Freeman and Nat Pryce, I was reminded of a project I worked on a while ago. It was a one-year-old system, poorly tested, integrating with a handful of other systems, and the code-base… well, I prefer not to remember. Despite this scenario, I joined the team to help them implement some new functionality.

I remember it was sometimes difficult to write tests: the classes were tightly coupled, with no clear responsibilities, too many attributes, bloated constructors, etc. And despite our best efforts, working around the bits that were preventing us from writing the tests, we felt we were going down the wrong road trying to do it in such a crappy code-base. As a result, some of our tests were massive: a pile of mocks, stubs and expectations that made it impossible to understand their purpose.

What have I learned?

Reading one of the book’s chapters, I learned that the same qualities that make an object easy to test also make the code responsive to change. In my situation, the tests were telling me how clumsy the code was and how difficult it would be to extend it.

I also learned that when we come across functionality that is difficult to test, asking ourselves how to test it is not enough; we also have to ask why it is difficult to test, and check whether it’s an opportunity to improve our code. The trick is to do it driven by tests, so we get rapid feedback both on the code’s internal qualities and on whether it’s doing what it’s supposed to do.

So they introduced a variation on the well-known TDD cycle: “Write a failing test” => “Make the test pass” => “Refactor”. As described in the figure below (extracted from the book), if we’re finding it hard to write the next failing test for our application, we should look again at the design of the production code, and often refactor it before moving on, until we reach a point where we can write tests that read well.

Extracted from Growing Object-Oriented Software, Guided By Tests— Steve Freeman and Nat Pryce

An Example of a Smell Tests Might Be Telling You

Reference data rather than behavior

When applying “Tell, Don’t Ask” or the “Law of Demeter” consistently, we end up with a coding style where we tend to pass behaviour into the system instead of pulling values up through the stack. So, picking up the famous Paperboy example: before refactoring the code to apply the “Law of Demeter”, the code and test would look something along the lines of the snippet shown below.

class Paperboy
  attr_reader :total_collected

  def initialize
    @total_collected = 0
  end

  def collect_money(customer, due_amount)
    if due_amount > customer.wallet.cash
      raise InsufficientFundsError
    else
      customer.wallet.cash -= due_amount
      @total_collected += due_amount
    end
  end
end

it "should collect money from customer" do
  customer = Customer.new :wallet => Wallet.new(:cash => 200)
  paperboy = Paperboy.new
  paperboy.total_collected.should == 0
  paperboy.collect_money(customer, 50)
  customer.wallet.cash.should == 150
  paperboy.total_collected.should == 50
end

We can easily see that the test is telling us it knows too much detail about the Customer class’s implementation. We can see its internals, which objects it’s related to, and, even worse, we’re also exposing implementation details of its peers. So it’s clear to me that it needs some design improvement. My main goal here is to hide Customer’s implementation details from the users of the Paperboy class, which means I don’t want to see anything but the Customer class and money amounts referenced in the test!

class Paperboy
  attr_reader :total_collected

  def initialize
    @total_collected = 0
  end

  def collect_money(customer, due_amount)
    @total_collected += customer.pay(due_amount)
  end
end

it "should collect money from customer" do
  customer = Customer.new :total_cash => 200
  paperboy = Paperboy.new
  paperboy.total_collected.should == 0
  paperboy.collect_money(customer, 50)
  customer.total_cash.should == 150
  paperboy.total_collected.should == 50
end

The method customer.pay(due_amount) wraps all the implementation detail up behind a single call. The client of Paperboy no longer needs to know anything about the types in the chain, and we’ve reduced the risk that a design change might cause ripples in remote parts of the codebase.

As well as hiding information, there’s a more subtle benefit from “Tell, Don’t Ask.” It forces us to make explicit and so name the interactions between objects, rather than leaving them implicit in the chain of getters. The shorter version above is much clearer about what it’s for, not just how it happens to be implemented.

All the logic necessary to collect the money is inside the Customer object, so it doesn’t have to expose its state to its peers.
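
For completeness, here is a minimal sketch of what the Customer side might look like after the refactoring; the original post only shows the Paperboy side, so the internals below (keeping a total_cash balance directly on the customer) are an assumption for illustration.

class InsufficientFundsError < StandardError; end

# Hypothetical Customer implementation, for illustration only.
class Customer
  attr_reader :total_cash

  def initialize(options)
    @total_cash = options[:total_cash]
  end

  # Deduct the amount owed and return what was actually paid, so that
  # Paperboy never needs to reach into the customer's wallet.
  def pay(due_amount)
    raise InsufficientFundsError if due_amount > @total_cash
    @total_cash -= due_amount
    due_amount
  end
end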

Now it’s safer to continue writing new failing tests for our objects.
Remember, listen to the tests!

[Alexandre Martins] Ping Pong Pairing: Even More Fun!

Thursday, March 18th, 2010

The agile software development practice I like the most, and at the same time the one I find the most difficult, is pair programming. Each individual has his or her own way of working, and characteristics such as motivation, engagement, habits, open-mindedness and coding/design style vary a lot from person to person. Sometimes striking a balance between these differences is quite hard. I am still not an expert in pair programming coaching, but I’ve been learning a lot on my current assignment.

And from this experience, it seems that clients are definitely more involved and amused when pairing follows the ping pong pattern.

Ping Pong Pattern

It works like this: developer 1 of the pair writes a test for a given feature and sees it fail, then passes the keyboard to developer 2, who makes the test pass, does some refactoring and writes another test, passing the keyboard back to developer 1 to do the same, and so on until the feature is done.

Why Do We Like It?

  • Challenge - Each time a developer writes a test for you to make pass, it feels like a challenge; then you do it and write another one, challenging them back.
  • Dynamics - The worst thing is a developer who just hogs the keyboard, making you feel useless. Ping pong pairing makes you swap the keyboard more frequently.
  • Engagement - Developers are much more engaged because they are constantly coding, not only observing.
  • Fun - It is so much fun when you have all the above items together!

[Alexandre Martins] Retrospectives: Analogy For Developers

Thursday, March 18th, 2010

One day, while reading Esther Derby’s book to prepare for a retrospective session, I came across a great analogy between retrospectives and the development life-cycle:

While continuous builds, automated unit tests, and frequent demonstrations of working code are all ways to focus attention on the product and allow the team to make adjustments, retrospectives focus attention on how the team does their work and interacts.

Indeed it helps people improve practices and focus on teamwork. That’s why it is one of my favorite meetings.

[Alexandre Martins] Clojure: Integrating With Java

Thursday, March 18th, 2010

Currently I am learning Clojure. It is a functional programming language, but not a pure one, since you can write both code that shares mutable state and code that doesn’t.

Why Clojure?

The main reason I chose Clojure is its easy interoperability with Java, still one of the most used languages, bringing the power of Lisp to it. It’s fast, since the code is compiled, and it supplements some of Java’s weaknesses, such as the Collections framework and concurrent programming. It is pretty straightforward to write concurrent programs: everything is automatic, with no manual lock management!

Integrating With Java

Importing classes

A single class:

(import java.util.List)

Multiple classes from the same package:

(import '(java.util List Set))

Creating instances

Using Java’s new keyword:

(new java.util.ArrayList)
(new ArrayList) ; after importing

Assigning a new List to a Clojure variable:

(def list (new java.util.ArrayList)) ; java.util.List is an interface, so we instantiate ArrayList
-> #'user/list

Syntactic Sugar:

(ArrayList.)

Accessing fields

Static fields:

(. Math PI)

Syntactic Sugar:

Math/PI

Invoking methods

Static Methods

(.currentTimeMillis System)

Syntactic Sugar:

(System/currentTimeMillis)

Non-static Methods

(. list size)
(. list get 0) ; returns the object stored at index 0

Syntactic Sugar:

(.size list)

Mixing Them All

Clojure provides a macro called memfn that makes it possible to use Java methods as functions. So, for a list of String objects, if I want to make all of them upper-case, all I have to do is:

(map (memfn toUpperCase) ["a" "short" "message"])

The map function applies the function/method toUpperCase to each element in ["a" "short" "message"].

You can also use the bean function to wrap a Java bean in an immutable Clojure map.

(bean (new Person "Alexandre" "Martins"))
-> {:firstName "Alexandre", :lastName "Martins"}

Once converted, you can manipulate the new map using any of Clojure’s map functions, like:

(:firstName (bean (new Person "Alexandre" "Martins")))
-> "Alexandre"

The code above extracts the value of the :firstName key, which originally came from the Person object.

[Alexandre Martins] 2008 Retrospective

Thursday, March 18th, 2010

After reading posts from some friends, I decided to write my 2008 retrospective, so here it goes!

Personal Life

  • First year married :)
  • Moved to the land down under.
  • Tried to go to the gym, but it still seems like I am better as an investor :)

Professional Life

  • Joined ThoughtWorks Australia.
  • Tried to post more often on my blog.
  • Became a Certified Scrum Master Of The Universe!
  • Projects: 4
  • Conferences attended: JAOO Sydney

Learning

2009 Resolutions

I still haven’t properly planned what to do in 2009; the only things for now are to keep learning Clojure and to understand more about applying Lean principles to software development.

[Alexandre Martins] Hamcrest Out Of Test Code!

Thursday, March 18th, 2010

It’s been a while since I read some interesting posts showing creative uses of the Hamcrest library outside test code. Since then I’ve been procrastinating on implementing my own version, experimenting with strongly typed Java delegates.

Thankfully, this week I came across a nice API called hamcrest-collections. It uses Hamcrest to implement features such as select, reject, map, reduce and zip, familiar from languages like Ruby and Python.

Selectors

Selectors can be used to select or reject, from any iterable object, the items that match a given Matcher. It reminds me of the Specification pattern from Domain-Driven Design, which is also used for querying objects that satisfy defined specifications.

public static final Person john = new Person("John", 28);
public static final Person nicole = new Person("Nicole", 12);
public static final Person ryan = new Person("Ryan", 23);
public static final Person nathan = new Person("Nathan", 18);

public static List<Person> list() {
    return Arrays.asList(john, nicole, ryan, nathan);
}



The code below selects, from the list of users defined above, the ones that are under twenty.

@Test
public void should_select_only_people_under_twenty_years_old() {
    List<Person> users = Person.list();
    Iterable underTwentyList = select(users, underAge(20)); // underAge is a custom matcher (not shown here)
    assertThat(underTwentyList, hasItems(nicole, nathan));
    assertThat(underTwentyList, not(hasItems(john, ryan)));
}



The code below rejects all the users that are under twenty.

@Test
public void should_reject_every_people_under_twenty_years_old() {
    List<Person> users = Person.list();
    Iterable aboveTwentyList = reject(users, underAge(20));
    assertThat(aboveTwentyList, hasItems(john, ryan));
    assertThat(aboveTwentyList, not(hasItems(nicole, nathan)));
}


Map and Reduce

Map is used to apply a function to each item in any iterable object, whereas Reduce combines all the resulting elements by applying a Reducer implementation. In our example, we map the timesTwo function, which doubles each element in the list, and then reduce the result by adding all the elements up.

@Test
public void should_double_each_number_in_the_list_then_sum_all_of_them() {
    List<Integer> numbers = Arrays.asList(1, 2, 3);
    MultiplyBy timesTwo = new MultiplyBy(2);

    Iterable result = map(numbers, timesTwo);
    assertThat(result, hasItems(2, 4, 6));

    Integer sum = reduce(result, new Sum());
    assertThat(sum, equalTo(12));
}


public class MultiplyBy implements Function {
    private Integer factor;

    public MultiplyBy(Integer factor) {
        this.factor = factor;
    }

    public Integer apply(Integer number) {
        return (int)number * factor;
    }
}


public class Sum implements Reducer {
    public Integer apply(Integer first, Integer second) {
        return first + second;
    }
}


Some developers are biased against using Hamcrest anywhere but in test code, especially since JUnit adopted it as its matcher library. Just ignore that, add these features to your runtime classpath, and let your creativity drive you when developing. Get rid of “for” loops from your life! :)

[Alexandre Martins] Lean: Go-Kart Exercise

Thursday, March 18th, 2010

Last week I attended the Lean Thinking And Practices For IT Leaders workshop organised by ThoughtWorks. We had the presence of Mary and Tom Poppendieck, my colleague Jason Yip and two consultants from KM&T. One of the things I really liked about it was that it wasn’t only driven by presentations but also by a lot of practical exercises, so we could get a better feeling for the benefits of applying this thinking and these practices. One of the exercises we did was the Go-Kart game.

How does it work?

Two teams are created (alpha and beta), and each one has to split up into five groups with the following responsibilities: disassembly, transportation, assembly, observation and time-keeping. They are given the task of completely disassembling, transporting and re-assembling a Go-Kart as quickly as possible, in a safe manner, while the observers take notes on problem points. The whole process is done twice, so that you can run it once, analyse the process used based on the feedback provided by the observers, and think of ways to improve it before running it a second time.

First Attempt

In our first attempt, all we knew was that we had to split the team into five groups. We had no idea of the need for a detailed process; we just did all the phases as fast as possible. Vikky, our team leader, proposed creating a manual with the detailed steps needed to assemble the kart, to be used by the assembly group. And that’s what we did!

Our marks

Planning time: 10 minutes
Disassembling time: 5 minutes
Assembling time: 12 minutes
Total time: 14 minutes 20 seconds
Quality of delivered product: OK

Problem Points (Gathered by observers)

  • The team took seven minutes to get organised and start doing something.
  • No leadership nomination: Vikky, one of the team members, had to nominate herself as team leader.
  • The disassembly group didn’t notice differences in the washers and bolts, causing uncertainty and wasted time in the assembly group.
  • Bottleneck in transporting parts from one station to another: no one was assigned to hand parts to or take parts from the transporter, so the transporter was left holding them, stopping the process flow.
  • The components needed to assemble specific parts of the car were not delivered together, making the assembly group wait for the remaining ones.
  • Some members in the assembly group were in a rush to finish fast and ignored the manual, resulting in some mistakes.

Second Attempt

Before starting the second attempt, we got together to discuss the problem points and came up with some ideas for improvement. Here they are:

Improvements

  • We nominated people in both the disassembly and assembly groups to be in charge of handing parts to, and picking them up from, the transporter.
  • We decided to hand over related parts together in chunks, so that they could be assembled straight away, eliminating the time wasted waiting for remaining parts.
  • We nominated specialists for roles such as assembling the wheels, etc.
  • We added one more member to the transportation group, to get rid of the bottleneck.

Instead of spending a long time planning, we did it the agile way: very quickly highlighting only the things we knew at the time, then running through the process, spiking and checking whether we were actually carrying out the improvements, before doing the official attempt. We found some problems, adjusted to them and immediately got organised for the second attempt.

Our marks

Planning time: 10 minutes
Disassembling time: 1 minute 50 seconds
Assembling time: 2 minutes 33 seconds
Total time: 3 minutes 45 seconds
Quality of delivered product: OK

Click here to see some photos of our team during the exercise.

Conclusion

Lean advocates that you should pursue perfection when improving your process, aiming to reduce effort, time, space, cost and mistakes, and I learnt that this applies to any organisation of any size. In this game, collaboration, self-organisation and rapid feedback contributed a lot to our improvement, helping us eliminate waste.

So, what could you do for your organisation?

Take a step back, look at the big picture of how things work in your company and ask yourself questions such as: How do we deliver? Does it take longer to test and deploy our system than to develop it? Who do we depend on to put the system into production? What is causing a bottleneck? What could I do to change this scenario? Answer these questions (or others you come up with) and think of improvements.

[Alexandre Martins] ThoughtWorks Australia is Hiring!

Thursday, March 18th, 2010

ThoughtWorks Australia is looking for new talents!

This time we are hiring Senior QA Testing Consultants!
So if you want to work in this fast-growing, unhierarchical consultancy, applying your knowledge of testing in a variety of client environments while constantly using the latest methodologies and technologies, keep reading this post; otherwise, just don’t bother :)

Working with us, you’ll get to work alongside truly talented teams and help them enhance their performance by bringing quality assurance to the forefront of clients’ minds. As well as ensuring the bug-free delivery of custom built software, you will also be working with clients to advise them on improving their test processes and teaching them about the very latest from the QA world.

Some Of The Duties

Our test processes are very different from those of many organisations. Testers are involved from the initial requirements gathering through implementation to deployment. They are always around to ask the awkward questions and try scenarios that analysts or developers are unlikely to dream up. They are involved when analysts are capturing requirements in the form of user stories. These stories are then converted into acceptance tests outlining specific scenarios. Testers play a big part in making sure those tests are well defined and complete, so that developers know when they have finished implementing the functionality defined in a story. For more information, visit http://testing.thoughtworks.com.

Desired Experience

  • Be a very hands-on tester who is comfortable across a whole range of functional testing including UAT, acceptance and system testing with tools like Fit, Fitnesse, Silk, Winrunner or any other automation tool
  • Experience of participating in full life cycle development right from the requirements gathering and analysis phase
  • Have worked on large, long term projects (more than 10 people, longer than 6 months)
  • Enjoyment of working closely with developers, analysts and clients in a highly collaborative environment
  • Exceptional communication skills
  • An unrivalled passion for delivery

Also Highly Desirable

  • Experience of creating test frameworks and strategy, choosing automated testing tools and creating testing standards
  • Experience of, or interest in working with Open Source testing tools like Selenium and Sahi
  • A knowledge of testing within an Agile development environment
  • A background in OO development
  • A track-record of innovation in testing
  • Experience of working in an onsite, consultancy environment

So if you are interested, then click here to apply online. And just a quick reminder that ThoughtWorks offers Visa Sponsorship for candidates.