Merge Variables/Dictionaries in Ansible

If you are trying to merge variables in an Ansible playbook and you do not have access to change the Ansible configuration (or do not want to do so), follow these steps:

Imagine you have a regular include_vars task:

- include_vars:
    dir: "vars"

The task above loads every file in the vars folder into the facts. As an example, suppose you have the following file, var.yml:

---
site:
  name: "Api Explorer"
  version: "1.0.0-SNAPSHOT"

Step 1 – Create a file inside the homolog folder containing only the values you want to override:

---
site:
  version: "1.0.1"

Step 2 – Create a task in your playbook that loads the files into a separate context. In this case, specific_vars is filled with the contents of the files in the homolog folder:

- include_vars:
    dir: "homolog"
    name: specific_vars

Step 3 – Create a task to combine the variables in the facts with the variables in specific_vars:

- name: "Combine specific with default variables"
  set_fact: {"{{ item.key }}": "{{ vars[item.key] | combine(item.value) if item.value['keys'] is defined else item.value }}"}
  with_dict: "{{ specific_vars }}"

Now you can run your playbook and you will see that the value of site.version is 1.0.1 and site.name still exists.
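The effect of the combine step can be sketched in Python (a minimal sketch, not Ansible itself; the variable names mirror the example above): each top-level key from specific_vars is merged over the corresponding default, with the override winning.

```python
# Minimal sketch of the merge performed by the set_fact/combine task.
# default_vars stands in for the facts loaded from vars/, and
# specific_vars for the ones loaded from homolog/.
default_vars = {"site": {"name": "Api Explorer", "version": "1.0.0-SNAPSHOT"}}
specific_vars = {"site": {"version": "1.0.1"}}

merged = {}
for key, value in specific_vars.items():
    if isinstance(value, dict) and key in default_vars:
        # combine(): keys from the override replace the defaults,
        # untouched keys are kept.
        merged[key] = {**default_vars[key], **value}
    else:
        merged[key] = value

print(merged["site"])  # {'name': 'Api Explorer', 'version': '1.0.1'}
```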

ASP.NET – Serving IIS Express over LAN

I recently ran into a problem: there was no server available on which to homologate my feature.

I had to validate it with the marketing manager, but had nowhere to do so.

After some research I found a “possible” solution that did not work for me; it involved changing the binding info in some .config file. It was definitely not the best solution.

Then I decided to turn to Node.js, and with a single search for the keywords “iis express node” I found this repo: https://github.com/icflorescu/iisexpress-proxy.

It could not be simpler.

(You need Node.js and npm installed on your machine.)

Follow these steps and start serving through your network:

Step 1:

npm install -g iisexpress-proxy

Step 2 (replace localPort with the port IIS Express is listening on and proxyPort with the port to expose on your network):

iisexpress-proxy localPort to proxyPort

With that, I declared my freedom from the homologation-server limitation.

Feel free to comment below if you find a better solution or any other alternative.

 

Generating series of data in PostgreSQL

I recently received an apparently simple task: “Count the number of returning users, per month age”, to be presented like a cohort analysis.

It seemed solvable with a simple query returning something like this:

year_month_first_buy | month_age | count
----------------------+-----------+-------
               200904 |         0 |    10
               200904 |         1 |    8
               200904 |         2 |    5
               200904 |         4 |    1

Then I realized that I had no returning users who bought 3 months after their first purchase.

And of course, the best way to present this kind of data is not easy to produce with a simple query.

It should be presented like this:

 year_month_first_buy |  0 |  1 |  2 |  3 | 4
----------------------+----+----+----+----+---
               200904 | 10 |  8 |  5 |  0 | 1
               200905 | 15 | 11 |  9 |  8 |
               200906 | 25 | 20 | 18 |    |

After some research, I found PostgreSQL's generate_series function, which solves the interval problem.

I have to create one record for each month from the starting date to the current month, and then count the number of users.

But first I need to calculate the difference in months between today and the date I started selling, as you can see below.

SELECT (date_part('year', f) * 12 + date_part('month', f))::integer
FROM age(NOW(), '2009-04-01') f
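The same month arithmetic can be sketched in Python (a simplified sketch that ignores the day-of-month component that age() also considers):

```python
from datetime import date

# Simplified sketch of date_part('year', f) * 12 + date_part('month', f)
# over age(NOW(), '2009-04-01'): whole months between two dates.
def months_between(start: date, today: date) -> int:
    return (today.year - start.year) * 12 + (today.month - start.month)

print(months_between(date(2009, 4, 1), date(2016, 9, 1)))  # 89
```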

Then I created one record for each month age with the generate_series function; see the code below.

SELECT * 
FROM generate_series(0, 
    (SELECT 
        (date_part('year', f) * 12 + date_part('month', f))::integer
    FROM age(NOW(), '2009-04-01') f)) i

Now that I know how to get the month age, I need to build the list of months since I started selling. My first sale was in April 2009, so I need one record for each month since that date. The code below generates one row for each month from the start date until now.

SELECT
  t1.year, t2.month
FROM
  (SELECT * 
   FROM generate_series(2009, 
                        date_part('year', NOW())::integer) year) t1,
  (SELECT * 
   FROM generate_series(1, 12) month) t2
WHERE
  (t1.year = 2009 AND t2.month >= 4)
OR
  (t1.year > 2009 AND t1.year < date_part('year', NOW())::integer)
OR
  (t1.year = date_part('year', NOW())::integer 
   AND t2.month <= date_part('month', NOW())::integer)

The result should be something like this:

 year | month
------+-------
 2009 | 4
 2009 | 5
 2009 | 6
...
 2016 | 8
 2016 | 9
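For comparison, here is a rough Python equivalent of that year/month series (a sketch only; the dates mirror the example, with September 2016 standing in for “now”):

```python
from datetime import date

# One (year, month) pair for every month from the first sale up to "today".
def month_series(start: date, today: date):
    pairs = []
    y, m = start.year, start.month
    while (y, m) <= (today.year, today.month):
        pairs.append((y, m))
        m += 1
        if m > 12:
            y, m = y + 1, 1
    return pairs

series = month_series(date(2009, 4, 1), date(2016, 9, 1))
print(series[0], series[-1])  # (2009, 4) (2016, 9)
```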

 

Using all together

I have the year/month list, I know how to calculate the month count between two dates, and I know how to create a series from 0 to this month count.

Now I have to combine all this data in order to generate a list (or a table) over which to group the count of users.

I did this with a lot of subqueries, but it runs fast due to the limited amount of data I have.

SELECT year_month, age 
FROM (
  SELECT
    t1.year || RIGHT('0'||t2.month, 2)::varchar year_month,
    (SELECT (date_part ('year', f) * 12 + 
             date_part ('month', f))::integer 
     FROM age(NOW(), (t1.year::varchar||'-'||t2.month::varchar
                      ||'-01')::timestamp) f) month_age
  FROM
    (SELECT * 
     FROM generate_series(2009, 
                          date_part('year', NOW())::integer) year) t1,
    (SELECT * FROM generate_series(1, 12) month) t2
  WHERE
    (t1.year = 2009 AND t2.month >= 4)
  OR
    (t1.year > 2009 AND t1.year < date_part('year', NOW())::integer)
  OR
    (t1.year = date_part('year', NOW())::integer 
     AND t2.month <= date_part('month', NOW())::integer)
  ) months,
  generate_series(0, months.month_age) age

The result is:

year_month | age
-----------+-----
    200904 | 0
    200904 | 1
...
    200904 | 89
    200905 | 0
    200905 | 1
...
    200905 | 88
    200906 | 0
...
    201609 | 0

Now the count is the easy part: just count the users whose first purchase was in year_month and who also bought age months later.

Then you will have to use some function or strategy to convert the ages into columns; in some versions of PostgreSQL you can use the crosstab function from the tablefunc extension.
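If crosstab is not available, the reshaping it performs can also be done outside the database. The Python below is a sketch that pivots hypothetical (year_month, age, count) rows, mirroring the 200904 cohort shown earlier, into one column per month age.

```python
# Hypothetical cohort rows (year_month, month_age, count), as returned
# by the counting query; missing ages become 0, as in the desired table.
rows = [("200904", 0, 10), ("200904", 1, 8), ("200904", 2, 5), ("200904", 4, 1)]

max_age = max(age for _, age, _ in rows)
table = {}
for year_month, age, count in rows:
    # One dict per cohort, pre-filled with zeros for every month age.
    row = table.setdefault(year_month, {a: 0 for a in range(max_age + 1)})
    row[age] = count

print(table["200904"])  # {0: 10, 1: 8, 2: 5, 3: 0, 4: 1}
```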

Conclusion

This was tricky to explain, so some steps may be confusing. Don’t hesitate to leave a comment if you have any questions or issues.

 

Creating an AWS Lambda using Java and the Spring Framework

Introduction

AWS Lambda is another way to use cloud computing on Amazon’s AWS.

It allows you to deliver your code and run it in production without any server management.

It auto-scales, is highly available, and you pay only while your function is running.

Creating the Application

For this basic example, I chose Spring as the framework because most of my web services are created with it.

If you are using Eclipse, you can install the AWS Toolkit for Eclipse. It is very helpful during the development and testing stages.

First, you have to create the pom.xml file:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>br.com.presba</groupId>
  <artifactId>presba-lambda</artifactId>
  <version>0.0.1</version>
  <name>presba-lambda</name>

  <properties>
    <spring.version>4.0.1.RELEASE</spring.version>
    <java.version>1.8</java.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-lambda-java-core</artifactId>
      <version>1.1.0</version>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-core</artifactId>
      <version>${spring.version}</version>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-context</artifactId>
      <version>${spring.version}</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <configuration>
          <createDependencyReducedPom>false</createDependencyReducedPom>
        </configuration>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>

As you can see, the dependencies for the Spring Framework and AWS Lambda are defined, and the maven-shade-plugin is configured in the build section.

Below are the structure and the files used in this project.

presba-lambda
    src/main/java
        br.com.presba
            dao
                BasicSample.java
            Application.java
            LambdaFunctionHandler.java
    src/main/resources
        application-context.xml

application-context.xml

<beans xmlns="http://www.springframework.org/schema/beans"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p"
 xmlns:aop="http://www.springframework.org/schema/aop" xmlns:context="http://www.springframework.org/schema/context"
 xmlns:jee="http://www.springframework.org/schema/jee" xmlns:tx="http://www.springframework.org/schema/tx"
 xmlns:task="http://www.springframework.org/schema/task"
 xsi:schemaLocation="http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.2.xsd 
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.2.xsd 
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.2.xsd 
http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee-3.2.xsd 
http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.2.xsd 
http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task-3.2.xsd">

<context:component-scan base-package="br.com.presba" />
</beans>

Application.java

package br.com.presba;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class Application {
    private static ApplicationContext springContext = null;
    private static ApplicationContext getSpringContext() {
        if (springContext == null) {
            synchronized (ApplicationContext.class) {
                if (springContext == null) {
                    springContext = new ClassPathXmlApplicationContext("/application-context.xml");
                }
            }
        }
        return springContext;
    }
    public static <T> T getBean(Class<T> clazz) {
        return getSpringContext().getBean(clazz);
    }
}

In Application.java, the ApplicationContext is created using the singleton pattern (with double-checked locking), and ApplicationContext.getBean is wrapped by Application.getBean, which prevents other classes from accessing the application context directly.

LambdaFunctionHandler.java

package br.com.presba;

import java.util.Calendar;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import br.com.presba.dao.BasicSample;

public class LambdaFunctionHandler implements RequestHandler<String, String> {
    private BasicSample basicSample;
    public String handleRequest(String input, Context context) {
        basicSample = Application.getBean(BasicSample.class);
        context.getLogger().log("AWS Request ID: " + context.getAwsRequestId());
        context.getLogger().log("Input: " + input + " at " + Calendar.getInstance().getTimeInMillis());
        return basicSample.doSomething(input);
    }
}

In LambdaFunctionHandler.java, the RequestHandler interface is implemented in order to receive the AWS Lambda call.

BasicSample.java

package br.com.presba.dao;

import org.springframework.stereotype.Component;

@Component
public class BasicSample {
    public String doSomething(String input) {
        return "Something has done with the input " + input;
    }
}

In BasicSample.java, the doSomething method takes the input string and returns a new string.

Deploying on AWS

The AWS Toolkit helps us with this step. The only thing we have to do is open any .java file, open the context menu (right-click), choose “AWS Lambda” and then the “Upload function to AWS Lambda…” option.

A wizard window will appear.

Choose the “Create a new Lambda function” option, type the name BasicSampleFunction and click “Next”.

In the next window, you must create an IAM role and an S3 bucket for your function.

You also need to change the Memory to 512 MB, because with less memory the application takes longer during a cold start.

Click Finish and wait until your function is deployed.

Running the Function

It’s time to test our work. Right click on any .java file, chose “AWS Lambda” and then “Run function on AWS Lambda…” option.

In the invoke dialog, type any text you want to pass to your Lambda function and click “Invoke”.

On the first execution, it will take some time (~3 seconds) to start your application.

If everything works fine, you will see the output in the console.

Troubleshooting

If you want to see the logs of your call, go to CloudWatch and click “Logs” in the left menu.

Look for your function; the log group should be /aws/lambda/FunctionName. Click it and a new window will appear.

Choose the first (or only) Log Stream to see the log.

Conclusion

This is just a starting point for AWS Lambda; you can try other frameworks instead of Spring.

In the next post I’ll show how to create an API Gateway and call the Lambda function.

 

Android: Lyrics by vagalume API

Another API I found while searching the internet that surprised me with its good documentation and usability is http://api.vagalume.com.br.

This API offers info about songs and artists; we can search for songs by name or phrase.

In this post I’ll show how to get the lyrics of the currently playing song.

You can get the source code at https://github.com/rpresb/android-letradamusica.

Swift: Getting the Levels of Water in São Paulo

I am Brazilian, and we are facing a water crisis (especially in São Paulo state). The drought is the most severe in São Paulo’s history, and people have become interested in knowing the levels of the Cantareira reservoir system.

The state water authority, SABESP, has a website that shows the current levels with daily updates.

An API (https://github.com/rafaell-lycan/sabesp-mananciais-api) was made based on the data available on this website and we can follow the level changes of each reservoir in the Cantareira reservoir system.

As usual, the source code of this post is available on my GitHub.

Chrome Extension with the forecast.io API

We can change the way we interact with the browser, and the way we absorb the information that is relevant to us, simply by customizing Chrome’s behavior with extensions.

The great thing about Chrome is how easy it is to create a great extension.

Extensions like RSS readers and e-mail notifiers are among the most common in the Chrome Web Store.

In this post I will show you how to create a simple extension that accesses an external API hosted by forecast.io.

The source code of the extension is available on my GitHub: https://github.com/rpresb/chrome-extension-forecastio
