Get an old version of a file on Git


When you need to get an old version of a file from git, follow the steps below:

Find the version you want:

git log -p [folder or file path]

It will return the list of changes, something like this:

commit c0924180dba243f12dcbd63c2eb52d7a7472ed5a
Author: Rodrigo De Presbiteris <>
Date: Fri Dec 8 14:03:55 2017 +0000

Some comments

diff --git a/file.txt b/file.txt
index 3c69a57..36f8855 100644
Binary files a/file.txt and b/file.txt differ

commit f367ba1747618abe502a61e317d87730e6bfbb04 (origin/branch_name)
Author: Rodrigo De Presbiteris <>
Date: Wed Dec 6 11:57:09 2017 +0000

Commit comments are shown here

diff --git a/file.txt b/file.txt
new file mode 100644
index 0000000..3c69a57

Now you have to copy the commit hash and run the command below:

 git show [commit_hash]:file.txt > ~/Desktop/file.txt

The old version of the file will be written to the destination you specified.
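For example, using the first commit hash from the log above (note that the file path is relative to the repository root):

git show c0924180dba243f12dcbd63c2eb52d7a7472ed5a:file.txt > ~/Desktop/file.txt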


Merge Variables/Dictionaries in Ansible


If you are trying to merge variables in an Ansible playbook and you do not have access to change the Ansible configuration (or do not want to do so), follow these steps.

Imagine you have a regular include_vars task:

- include_vars:
    dir: "vars"

The above code adds every file in the vars folder to the facts collection. As an example, suppose you have the following file, vars/var.yml:

  name: "Api Explorer"
  version: "1.0.0-SNAPSHOT"

Step 1 – Create a file inside the homolog folder with the value you want to override:

  version: "1.0.1"

Step 2 – Create a task in your playbook that loads these files into a separate context. In this case, specific_vars is filled with the contents of the files in the homolog folder:

- include_vars:
    dir: "homolog"
    name: specific_vars

Step 3 – Create a task to combine the variables in facts with the variables in specific_vars:

- name: "Combine specific with default variables"
  set_fact: {"{{ item.key }}": "{{ vars[item.key] | combine(item.value) if item.value['keys'] is defined else item.value }}"}
  with_dict: "{{ specific_vars }}"

Now you can run your playbook and you will see that site.version is 1.0.1 while the other keys, such as site.name, still exist. A minimal end-to-end sketch is shown below.
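Putting the steps together, a minimal playbook could look like this sketch; the hosts value and the final debug task are only illustrative, not part of the original setup:

- hosts: localhost
  gather_facts: false
  tasks:
    # Load the default variables (vars/var.yml defines site.name and site.version)
    - include_vars:
        dir: "vars"

    # Step 2: load the environment-specific overrides into a separate context
    - include_vars:
        dir: "homolog"
        name: specific_vars

    # Step 3: merge each top-level dictionary from specific_vars over the defaults
    - name: "Combine specific with default variables"
      set_fact: {"{{ item.key }}": "{{ vars[item.key] | combine(item.value) if item.value['keys'] is defined else item.value }}"}
      with_dict: "{{ specific_vars }}"

    # site.version is now 1.0.1 and site.name is still "Api Explorer"
    - debug:
        var: site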

ASP.NET – Serving IIS Express over LAN


I recently ran into a problem: there was no homologation (staging) server available for my feature.

I had to validate it with the marketing manager, but I had nowhere to do so.

After some research I found a “possible” solution that did not work for me; it involved changing the bindings info in some .config file. It was definitely not the best solution.

Then I decided to turn to Node.js, and with a single search for the keywords “iis express node” I found the iisexpress-proxy repo.

It could not be simpler.

(You need to have Node.js and npm installed on your machine.)

Follow these steps and start serving through your network:

Step 1:

npm install -g iisexpress-proxy

Step 2:

iisexpress-proxy localPort to proxyPort
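For example, if IIS Express serves your site on port 51123 and you want to expose it on port 3000 (these port numbers are just an illustration, use your own):

iisexpress-proxy 51123 to 3000

After that, other machines on your network can reach the site through your machine's IP address on the proxy port.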

With that, I declared my freedom from the homologation server limitation.

Feel free to comment below if you find a better solution or any other alternative.


Generating series of data in PostgreSQL


Recently I received an apparently simple task: “Count the number of returning users, per month age”, and it should be presented like a cohort analysis.

It could be solved with a simple query, returning something like this:

year_month_first_buy | month_age | count
               200904 |         0 |    10
               200904 |         1 |    8
               200904 |         2 |    5
               200904 |         4 |    1

Then I realized that I had no returning users who bought 3 months after their first purchase.

And of course, the best way to present this kind of data is not easy to produce with a simple query.

It should be presented like this:

 year_month_first_buy |  0 |  1 |  2 |  3 | 4
               200904 | 10 |  8 |  5 |  0 | 1
               200905 | 15 | 11 |  9 |  8 |
               200906 | 25 | 20 | 18 |    |

After some research, I found the generate_series function in PostgreSQL to solve the interval problem.

I have to create one record for each month from the starting date to the current month, and then count the number of users.

But first I need to calculate the difference in months between today and the date I started selling, as you can see below.

SELECT (date_part('year', f) * 12 + date_part('month', f))::integer
FROM age(NOW(), '2009-04-01') f

Then I created one record for each month with the generate_series function; see the code below.

SELECT i
FROM generate_series(0,
        (SELECT (date_part('year', f) * 12 + date_part('month', f))::integer
         FROM age(NOW(), '2009-04-01') f)) i

Now that I know how to get the month age, I need to create the list of months since I started selling. My first sale was in April 2009, so I need one record for each month since that date. The code below generates one row for each month from the start date until now.

SELECT
  t1.year, t2.month
FROM
  (SELECT *
   FROM generate_series(2009,
                        date_part('year', NOW())::integer) year) t1,
  (SELECT *
   FROM generate_series(1, 12) month) t2
WHERE
  (t1.year = 2009 AND t2.month >= 4)
  OR (t1.year > 2009 AND t1.year < date_part('year', NOW())::integer)
  OR (t1.year = date_part('year', NOW())::integer
      AND t2.month <= date_part('month', NOW())::integer)

The result should be something like this:

 year | month
 2009 |     4
 2009 |     5
 2009 |     6
  ... |   ...
 2016 |     8
 2016 |     9


Putting it all together

I have the year/month list, I know how to calculate the month count between two dates, and I know how to create a series from 0 to that month count.

Now I have to combine all this data to generate a list (or a table) over which to group the count of users.

I did this with a lot of subqueries, but it runs fast due to the limited amount of data I have.

SELECT year_month, age
FROM
  (SELECT
     t1.year || RIGHT('0'||t2.month, 2)::varchar year_month,
     (SELECT (date_part('year', f) * 12 +
              date_part('month', f))::integer
      FROM age(NOW(), (t1.year::varchar||'-'||t2.month::varchar
                       ||'-01')::timestamp) f) month_age
   FROM
     (SELECT *
      FROM generate_series(2009,
                           date_part('year', NOW())::integer) year) t1,
     (SELECT * FROM generate_series(1, 12) month) t2
   WHERE
     (t1.year = 2009 AND t2.month >= 4)
     OR (t1.year > 2009 AND t1.year < date_part('year', NOW())::integer)
     OR (t1.year = date_part('year', NOW())::integer
         AND t2.month <= date_part('month', NOW())::integer)
  ) months,
  generate_series(0, months.month_age) age

The result is:

year_month | age
    200904 |   0
    200904 |   1
       ... | ...
    200904 |  89
    200905 |   0
    200905 |   1
       ... | ...
    200905 |  88
    200906 |   0
       ... | ...
    201609 |   0

Now the count is the easy part: just count how many users made their first purchase in the given year_month and also bought age months after it. A rough sketch follows below.
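As a rough sketch only: assuming a purchases table with user_id and purchase_date columns (these names are not from the original post), and assuming the year_month/age list above is available as "series", the grouped count could look like this:

-- Hypothetical sketch: "purchases" (user_id, purchase_date) and the generated
-- "series" (year_month, age) are assumptions used for illustration.
SELECT s.year_month AS year_month_first_buy,
       s.age        AS month_age,
       COUNT(DISTINCT p.user_id) AS count
FROM series s
LEFT JOIN (
    SELECT user_id,
           purchase_date,
           MIN(purchase_date) OVER (PARTITION BY user_id) AS first_buy
    FROM purchases
) p
  ON to_char(p.first_buy, 'YYYYMM') = s.year_month
 AND (date_part('year',  age(p.purchase_date, p.first_buy)) * 12 +
      date_part('month', age(p.purchase_date, p.first_buy)))::integer = s.age
GROUP BY s.year_month, s.age
ORDER BY s.year_month, s.age;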

Then you will have to use some function or strategy to pivot the ages into columns; in PostgreSQL you can use the crosstab function from the tablefunc extension.
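As a hedged sketch of that pivot step (cohort_counts stands for a view or CTE holding the year_month_first_buy/month_age/count rows from the previous step; it is an assumption, not from the original post):

-- crosstab comes from the tablefunc extension
CREATE EXTENSION IF NOT EXISTS tablefunc;

SELECT *
FROM crosstab(
    $$ SELECT year_month_first_buy, month_age, "count"
       FROM cohort_counts
       ORDER BY 1, 2 $$,
    $$ SELECT generate_series(0, 4) $$   -- one output column per month age
) AS ct(year_month_first_buy text, "0" bigint, "1" bigint, "2" bigint, "3" bigint, "4" bigint);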


This was not easy to explain, so you may be confused by some of the steps. Don't hesitate to leave a comment if you have any questions or issues.


Creating an AWS Lambda function using Java and the Spring Framework



AWS Lambda is another way to use cloud computing on Amazon’s AWS.

It allows you to deliver your code and run it in production without any server management.

It auto-scales, is highly available, and you pay only while your function is running.

Creating the Application

For this basic example, I chose Spring as the framework because most of my web services are created with it.

If you are using Eclipse, you can install the AWS Toolkit for Eclipse. It is very helpful during the development and testing stages.

First, you have to create the pom.xml file.


The dependencies for the Spring Framework and AWS Lambda are declared in it, and the maven-shade-plugin is configured in the build section.
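A minimal pom.xml along those lines might look like the sketch below; the project coordinates and version numbers are placeholders rather than the post's original values:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <!-- placeholder coordinates, not the post's originals -->
  <groupId>com.example</groupId>
  <artifactId>lambda-spring-sample</artifactId>
  <version>1.0.0</version>
  <packaging>jar</packaging>

  <dependencies>
    <!-- Lambda runtime interfaces: RequestHandler and Context -->
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-lambda-java-core</artifactId>
      <version>1.1.0</version>
    </dependency>
    <!-- Spring container: ClassPathXmlApplicationContext, @Component, component-scan -->
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-context</artifactId>
      <version>4.3.9.RELEASE</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <!-- builds the single fat jar that is uploaded to Lambda -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>2.4.3</version>
        <configuration>
          <createDependencyReducedPom>false</createDependencyReducedPom>
          <transformers>
            <!-- merge Spring's namespace handler files so the shaded jar still parses the XML config -->
            <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
              <resource>META-INF/spring.handlers</resource>
            </transformer>
            <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
              <resource>META-INF/spring.schemas</resource>
            </transformer>
          </transformers>
        </configuration>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>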

Below are the files used in this project.



application-context.xml:

<beans xmlns="http://www.springframework.org/schema/beans"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p"
 xmlns:aop="http://www.springframework.org/schema/aop" xmlns:context="http://www.springframework.org/schema/context"
 xmlns:jee="http://www.springframework.org/schema/jee" xmlns:tx="http://www.springframework.org/schema/tx"
 xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                     http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd">

 <context:component-scan base-package="" />

</beans>

Application.java:

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class Application {
    private static ApplicationContext springContext = null;

    private static ApplicationContext getSpringContext() {
        if (springContext == null) {
            synchronized (ApplicationContext.class) {
                if (springContext == null) {
                    springContext = new ClassPathXmlApplicationContext("/application-context.xml");
                }
            }
        }
        return springContext;
    }

    public static <T> T getBean(Class<T> clazz) {
        return getSpringContext().getBean(clazz);
    }
}

In Application.java, the ApplicationContext is created using the singleton pattern, and ApplicationContext.getBean is wrapped by Application.getBean, so the application context itself is not exposed to other classes.


LambdaFunctionHandler.java:

import java.util.Calendar;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class LambdaFunctionHandler implements RequestHandler<String, String> {
    private BasicSample basicSample;

    @Override
    public String handleRequest(String input, Context context) {
        basicSample = Application.getBean(BasicSample.class);
        context.getLogger().log("AWS Request ID: " + context.getAwsRequestId());
        context.getLogger().log("Input: " + input + " at " + Calendar.getInstance().getTimeInMillis());
        return basicSample.doSomething(input);
    }
}

In LambdaFunctionHandler.java, the RequestHandler interface is implemented in order to receive the AWS Lambda call.


BasicSample.java:

import org.springframework.stereotype.Component;

@Component
public class BasicSample {
    public String doSomething(String input) {
        return "Something has been done with the input " + input;
    }
}

In BasicSample.java, the doSomething method uses the input string and returns a new string.

Deploying on AWS

The AWS Toolkit helps us with this step. The only thing we have to do is open any .java file, open the context menu (right-click), choose “AWS Lambda” and then the “Upload function to AWS Lambda…” option.


A new window will open.


Choose the “Create a new Lambda function” option, type the name BasicSampleFunction and click “Next”.


In the next window, you must create an IAM role and an S3 bucket for your function.

You also need to increase the memory to 512 MB, because with less memory the application takes longer during the cold start.

Click Finish and wait until your function is deployed.


Running the Function

It’s time to test our work. Right-click any .java file, choose “AWS Lambda” and then the “Run function on AWS Lambda…” option.


In the window that appears, type any text you want to pass to your Lambda function and click “Invoke”.


On the first execution, it will take some time (~3 seconds) to start your application.

If everything works fine, you will see the output in the console.


If you want to see the logs of your call, go to CloudWatch and click on Logs in the left menu.

Look for your function; the log group should be /aws/lambda/FunctionName. Click on it and a new window will appear.

Choose the first (or the only) log stream to see the log.


This is just a starting point for AWS Lambda; you can try other frameworks instead of Spring.

In the next post I’ll show how to create an API Gateway and call the Lambda function.


Android: Lyrics by the Vagalume API


Another API I found while searching on the internet that surprised me with its good documentation and usability is the Vagalume API.

This API offers info about songs and artists; we can search for a song by name or by a phrase.

In this post I’ll show how to get the lyrics of the currently playing song.

You can get the source code on my GitHub.


Swift: Getting the Levels of Water in São Paulo


I am Brazilian and we are facing a water crisis (especially in São Paulo state). The drought is the most severe in the history of São Paulo, and people have become interested in knowing the levels of the Cantareira reservoir system.

The state water authority, SABESP, has a website that shows the current levels with daily updates.

An API was built on top of the data available on this website, and with it we can follow the level changes of each reservoir in the Cantareira reservoir system.

As usual, the source code for this post is available on my GitHub.
