Posts from August, 2010

[Bruno Mentges de Carvalho] Disabling the confirmation when running an application downloaded from the internet on Mac OS X

Tuesday, August 17th, 2010

I am now a Mac user, and one thing has been bothering me since day one: the security confirmation asking whether you really want to run an application downloaded from the internet, every time you open it.

When that application is Firefox and you use it every day, this gets really annoying. So I found the following solution:

xattr -d -r com.apple.quarantine /Applications

You can point it at any directory that contains such applications, for example ~/Downloads, and it will stop complaining about the applications that already exist there.
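
For example, a quick sketch (the Firefox.app path below is just an illustration): list the extended attributes of a downloaded app to confirm it is quarantined, then strip the flag from the whole directory:

# List extended attributes to confirm the app is quarantined (path is an example)
xattr -l ~/Downloads/Firefox.app

# Strip the quarantine attribute from everything under ~/Downloads
xattr -d -r com.apple.quarantine ~/Downloads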

[Alexandre Martins] Deployment Smoke Tests: Is Anyone Being Slack?

Thursday, August 12th, 2010

I’ve always been a huge fan and advocate of using tests for developing applications. For me, working on software without a decent suite of tests is like walking on eggshells: each modification brings the risk of breaking something in the system. To mitigate this risk I always make sure I have a minimum set of unit, integration and acceptance tests covering my application.

But does all that give us the confidence that the system will work perfectly when it is deployed to any of the environments on its way through the release pipeline? I thought so until I worked with this guy and read this book. Tom Czarniecki first introduced me to the concept of smoke tests; then, reading Jez Humble and David Farley’s Continuous Delivery, I could grok the real value of using them in conjunction with a build pipeline.

What are smoke tests?

As mentioned above, deployment smoke tests are quite handy because they give you the confidence that your application is actually running after being deployed. They use automated scripts to launch the application and check that the main pages come up with the expected contents, and also that any services your application depends on (database, message bus, third-party systems, etc.) are up and running. Alternatively, you can reuse some acceptance or integration tests as smoke tests, provided they exercise critical parts of the system. The name comes from electronics, where you check each component in isolation and see whether it emits smoke.
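
As a rough sketch (the URLs, expected content and database credentials below are made up for illustration, not the project's actual checks), a deployment smoke test can be as simple as a shell script run right after the deploy:

#!/bin/sh
# Check that the main page comes up with the expected content.
curl --silent --fail http://my-app.example.com/ | grep -q "Welcome" || {
  echo "Home page did not return the expected content" >&2
  exit 1
}

# Check that a service the application depends on (here, the database) is up.
mysqladmin --host=db.example.com --user=smoke ping || {
  echo "Database is not responding" >&2
  exit 1
}

echo "Smoke tests passed"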

[Image: smoke_tests.png]

Provide clear failure diagnostics

If something goes wrong, your smoke tests should give you some basic diagnostics explaining why your application is not working properly. In our current project at Globo.com, we are moving towards using Cucumber to write our smoke tests, so that we have a set of meaningful, executable scripts like the one below.

Feature: database configure
  System should connect to the database
  Scenario: should connect to the database
    When I connect to "my_app" database as root
    Then it should contain tables "users, products"
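
A deployment script could then run these features right after the deploy step and fail the pipeline stage when any scenario breaks. A minimal sketch, assuming the smoke features live under a features/smoke directory:

#!/bin/sh
# Run only the smoke features; a non-zero exit status fails the pipeline stage.
cucumber features/smoke --format pretty || {
  echo "Smoke tests failed: the deployed application is not healthy" >&2
  exit 1
}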

For those who like using Nagios for monitoring infrastructure, Lindsay Holmwood wrote a program called cucumber-nagios which allows you to write Cucumber tests that produce the output format expected by Nagios plugins, so you can write BDD-style tests in Cucumber and monitor the results in Nagios.

Knowing quickly whether you are ready or not!

Clearly rapid feedback and safety are the two major benefits of introducing smoke tests as part of a release process.

Rapid feedback

In our project, we implemented a deployment pipeline, so each new commit to the source repository is a potentially deployable version for any environment, even production. We have the commit stage, where we run all the quick tests; as soon as all of them pass, the acceptance-test stage is automatically triggered and the longer tests (integration and acceptance) are run. Once they have also passed, the application is automatically deployed into the dev environment. Getting a green at this stage means that it has been successfully deployed and smoke tested. But there is still some exploratory testing to be performed before releasing this version into the staging environment, and in our team this is done by the product owner together with a developer. As soon as they are ready to sign the story off, all they have to do is click the manual button, which in turn deploys the application into the qa1 (UAT) environment. If it is green they can proceed; otherwise they pull the cord, because something is malfunctioning, as you can see in the picture.

[Image: Screen shot 2010-08-11 at 4.13.01 PM.png]

Don’t let the application deceive you

It’s quite frustrating when all you need is for the system to work as expected, because you are about to showcase it to your customers, and on the first thing you click all you see is a big, ugly error screen instead of the page they were expecting. Later on you find out it was due to a database breakdown. That is an embarrassing situation which could have been avoided simply by checking the smoke test diagnostics before the showcase.

[Tiago Motta] Increasing productivity by complexity points

Monday, August 9th, 2010

Lately I have heard many friends in the field talking about pressure to increase productivity measured in complexity points. That worries me a lot. Although the desire to increase what the development area delivers is a noble one, quantifying it using the complexity points of the stories does not mean much.

However, before simply complaining about the pressure, we need to analyze the possible causes of low productivity that might be driving this kind of pressure. Thinking it over, I came up with three possibilities and, with them, possible solutions that would be more effective than pushing for an increase in that kind of number.

1- There is no confidence that the team is working at its full capacity. In other words, whoever is applying the pressure believes the team members are slacking off or spending the day on trivia instead of focusing on delivering the project. Pressure to deliver more complexity points may even solve this problem temporarily, but the problem can also just be masked by team members who feel coerced into working overtime to cover the expected deliveries. In short, the problem is only really solved with frank conversations and a more active presence from the interested party.

2- The team, noticing slack in the iteration, takes the opportunity to improve the quality of the delivery even further. This kind of perfectionism happens quite often. With more time to think, developers often take the chance to implement more elaborate tests and flows, avoiding bugs that would later take three times as long to fix, and designers take the chance to refine the interfaces and delight the client even more. In this case, pressure to deliver more complexity points only encourages a drop in quality. In other words, even though there is an immediate increase in velocity, it will soon fall again because of bug fixes and visual adjustments.

3- The team is inexperienced or does not know the adopted technology. In this case, pushing for more complexity points to be delivered does no good. In a way, over time, deliveries will naturally get bigger, or in many cases smaller, because the team will start estimating with fewer points, which shows how ineffective these numbers are for measuring productivity. To solve this problem there are several options, such as organizing dojos, encouraging pair programming, recommending books and training, and encouraging experimentation with time for personal projects.

I have seen all three possibilities above happen up close in the teams I have worked with, and almost every time the problem was solved without using complexity points as a measurement parameter. Can you think of any other cause of low productivity? Any idea how to solve it? What is your opinion on the subject?

[Guilherme Garnier] Fixing a Flash Player problem on Ubuntu with Firefox >= 3.6.4

Friday, August 6th, 2010

Since version 3.6.4, Firefox has a feature called Crash protection (Windows and Linux only). The browser now creates a separate process called plugin-container to run the Flash, QuickTime and Silverlight plugins. The goal is to prevent an error in one of these plugins from hanging the browser: if that happens, only the plugin is terminated.

After I updated Firefox to this version (actually, I went straight from 3.6.3 to 3.6.6) on Ubuntu 10.04, Flash simply stopped working, showing the message “The Adobe Flash plugin has crashed”. I tried reinstalling Flash several times, both through the adobe-flashplugin and flashplugin-installer packages using apt-get and by downloading a .deb file directly. The plugin check page reported that Flash was installed, but with an outdated version (9.x). I also tried reinstalling Firefox, to no avail.

After a few days of frustrated attempts, I finally had the idea of testing with another browser. In Chrome, Flash worked perfectly, so the problem was directly related to Firefox.

After some more research, I found the article Plugin-container and out-of-process plugins. I discovered that there is a parameter in the Firefox configuration (type about:config in the address bar to access it) called dom.ipc.plugins.enabled, which enables or disables crash protection for third-party plugins. This parameter applies to any plugin not otherwise specified, and its default value is false. There is a specific parameter for the Flash plugin: on Linux it is dom.ipc.plugins.enabled.libflashplayer.so, and on Windows it is dom.ipc.plugins.enabled.npswf32.dll. Its default value is true; after I changed it to false, Flash started working on most sites. The Globo.com videos, however, still did not work, but they started showing a message saying that Flash was outdated.

To find out more about the plugins, I typed about:plugins in the address bar. The page that came up showed two Flash versions installed: one was the latest (10.x) and the other was outdated (9.x). However, this screen did not show the location of each plugin. To find the full path of each plugin, I went back to the configuration screen (about:config) and changed the value of the plugin.expose_full_path parameter to true. From then on, the about:plugins screen displays the path of each installed plugin.

That is how I found out that there was an older Flash version installed in my home directory (at /home/guilherme/.mozilla/plugins/libflashplayer.so). I don’t know the order in which Firefox looks for plugins, but apparently this one was being used instead of the newer one, which lives at /usr/lib/flashplugin-installer/libflashplayer.so. I removed the version in my home directory and Flash went back to working perfectly, even after re-enabling crash protection.
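
If you hit the same problem, a quick way to spot duplicate copies of the plugin is to search the locations mentioned above. A small sketch (adjust the paths to your own setup):

# List every copy of the Flash plugin that Firefox might load.
find ~/.mozilla /usr/lib -name 'libflashplayer.so' 2>/dev/null

# Remove the outdated copy from the home directory, keeping the system-wide one.
rm ~/.mozilla/plugins/libflashplayer.so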





[Bernardo Heynemann] Dream Team – Part IV – Problem Solving

Wednesday, August 4th, 2010

Introduction

So far our team has decided on a set of values and we saw how they discussed the first one.

Now we join them as they get back from Starbucks to discuss their values again.

Problem Solving

The meeting starts again and once more I’m in charge of keeping time. John starts:

http://www.flickr.com/photos/ell-r-brown

Credits to Ell Brown

John – Hi guys! Welcome back! That was a nice coffee break. There’s something I’ve been thinking about and I’d really like to talk with you guys about.

I would like to discuss how each of you feels about problem solving. If we are to keep continually improving, we need a systematic way of solving problems. At least that’s how I feel.

Jake - What do you mean by systematic?

John - I mean we need some way to archive the knowledge we gathered and the solution we had for each problem. This way we don’t need to repeat ourselves when solving the same problem, or even when some other team in the company wants to solve it. I read about how Lean companies do it with A3 report cards and I really liked what I read.

Susan - Even though I never worked with a formal way of solving problems, I often thought that we should have a way of publishing our findings when we decide something.

Christian - Can you give an example?

Susan - Sure. In my last project we were using Python as our main language. Some team members had experience with Django, while others preferred to use a more low-level framework like CherryPy. Preferences aside, we were expected to provide real-time dynamic data to more than a hundred thousand concurrent users, so performance was a key constraint for us.

Jake - Wasn’t there a benchmark you could use to decide?

Susan - We didn’t find one, so we ended up doing a thorough comparison of some Python Web Frameworks with many different production settings (Apache, Nginx, you name it). The results pointed us in the right direction.

The thing is that even though it was very cool to learn all that on a personal level, I feel we didn’t leave behind any trail of this knowledge for other teams. Probably others have done this already, as our production team reported. If we don’t have a formal way of solving problems like this, we are bound to repeat this effort time and again.

John – I couldn’t have summarized it better. Thanks Susan for explaining what I meant. That’s exactly it. Knowledge generation is a problem that I really care about. What about you guys? What do you think?

http://www.flickr.com/photos/acidwashphotography

Credits to Dan.

Jane - We have that in the Design team. We always leave behind the reasons and the discoveries that led to a certain standard. I thought that you guys did that as well. I most certainly think we need to standardize how we decide things. This is key even within our team.

Imagine Christian and I finish some UI definition together and we want to share it with you guys. I expect that you’ll want to know the assumptions and everything that led us to define the UI the way it was defined, right? For that to happen there needs to be some mechanism for formalizing knowledge. I really like this A3 thing, since I love sketching with pen and paper.

Christian - I don’t have much to add, except that whenever I work on open source projects, each project has its own way of preserving knowledge. Wikis, evolution proposals, docs, release notes, you name it. The key thing is that all successful projects share this trait: they all keep their knowledge at heart. I think we should do the same. I’d really like to try this A3 technique, as long as we can change it later if we don’t like it.

Joseph – Hey, that’s brilliant. We agreed already on changing any process that does not work to something better, didn’t we?

All - YES!

John - Ok, we have a way to solve problems, but how are problems going to affect our work? I hear Susan has already worked here in the company using the Stop the Line methodology, right?

Susan – Yes, I did. And it was GREAT! The team used the motto: “The best time to solve a problem is RIGHT NOW!”. I learned later that this translates to the Jidoka principle in Lean methodologies. It means that whenever we have a problem, we stop working on whatever we are working on to solve that problem. It seems counter-intuitive, since we’ll be “less” productive due to solving problems. The thing is, even though this slows us down a little in the beginning, it speeds us up GREATLY in the long run.

Christian – What about bugs? Do we keep them in a bug tracker?

John - I reckon that if we keep solving bugs whenever we find them or whenever people report them, we should have 1 or 2 bugs open at any given time tops. Who needs a tracker to track 1 or 2 items?

Christian - Makes sense.

Joseph – So we as a team agree that whenever a problem arises the proper number of people in the team should stop as soon as possible to fix it and then find a way (at first an A3 report) to leave behind the knowledge on how it got fixed and why. Is that it?

All – Yep.

Joseph - Ok, I think we got our second value. The best time to solve any problem or defect is now.

Conclusion

Software Craftsmanship is a creative activity. As such, we are confronted with problems and issues every day. It might seem like a good idea to just archive the issue in some way (bug tracker, tech debt card or any other way you can think of).

The problem with this is that the issues start building up and work starts to slow down. If you keep stopping the line whenever you find a problem, eventually the number of issues will decrease astoundingly. When the number of issues decreases, the speed of the team increases.

Refactoring code is one way to stop the line whenever you see code that is not clear. Fixing a bug you found while on another story is a way to stop the line. Acting on an integration problem with the customer is a way to stop the line. Introducing an improvement in the process is a way to stop the line. Anything that solves any problem RIGHT NOW is a way to stop the line.

There must be a way of publishing the results of problem solving so that people can benefit from them or refer to them in the future. This leads to better knowledge management and improved collaboration within the company. One suggested way of doing this is the way scientists do it: the scientific method. Since I have already discussed it, I won’t say anything further.

If anyone has any suggestions on problem solving or stopping the line, please leave comments.

http://blog.heynemann.com.br/2010/07/20/problem-solving-scientific-method/

[Rafael Biriba] Crontab: running a script every 15 seconds

Sunday, August 1st, 2010

I created this post with two goals: to share my idea and, who knows, also to get new ways of solving a small problem.

The problem: let’s say you need to hit some URL (for example: http://localhost:3000/coletar) every 15 seconds. That will run a script which collects a bunch of data and stores it in the database.

So, running a command like “curl http://localhost:3000/coletar” every 15 seconds would solve my problem. Now, how do you do that using crontab?

If you search for a solution on the internet, you will find some good ideas, such as writing a script with a few sleeps and an internal loop, so you don’t even need cron:

#!/bin/sh
while [ 1 ]; do
    curl "http://localhost:3000/coletar"
    sleep 15
done

Just run this script and, with its endless loop, it will run the command every 15 seconds… Now you must be thinking: “Problem solved!”…

Actually, no. The data must be collected exactly every 15 seconds. That means that if the server happens to take a while to answer a request, it can delay the next collection, and after some time the delay only tends to get worse.

So what is the best solution for this problem?
Since I need to collect every 15 seconds without one collection interfering with the next, I think the best way to do it was to add the following to the user’s crontab:

* * * * * curl 'http://localhost:3000/coletar'
* * * * * sleep 15 && curl 'http://localhost:3000/coletar'
* * * * * sleep 30 && curl 'http://localhost:3000/coletar'
* * * * * sleep 45 && curl 'http://localhost:3000/coletar'

With that, the commands are executed independently every 15 seconds…

So that’s it…

If you know a more efficient way to do this… don’t forget to share your idea with me ;)

