Friday, July 19, 2019

EC2 Instances

EC2 instances are Amazon AWS's virtual servers. They offer the widest set of combinations of CPU, RAM, and separate storage alternatives, from small sizes meant for light processing up to instances dedicated to storage and data management.

AWS also provides a robust framework to manage network access and configuration. You can create subnets to separate business areas within the same account and protect data, and an out-of-the-box firewall reduces the impact of DoS attacks and lets you reuse rules with little effort.

Allocation of public IP addresses and DNS management is simple and adds a lot of value to the final solution.

In the network space, AWS offers the possibility to create virtual private networks that act like a private cloud, letting the servers created as EC2 instances communicate while protecting them at different stages. An excellent use of this concept is to create separate environments for SIT, UAT, and Production, avoiding mixes that could affect the final result in production.

Another great EC2 feature is the possibility to create templates of a particular instance. Templates make it easy to launch servers to solve particular problems and to scale horizontally, adding more instances to process a big load of information. AWS also offers a specialized load-balancing service, Elastic Load Balancing, that distributes traffic across multiple instances according to the load received, letting you use the low-cost instance alternatives.

EC2 instances, combined with the other services of the AWS cloud, let you design a high-end solution covering almost any business requirement, with plenty of power to manage intricate business logic and non-functional requirements.

EC2 offers a practical set of billing schemas that help customers, developers, students, and bloggers use the platform without feeling that they are paying too much for it.
  • On-demand instances, billed by the hour, are a good solution for projects that are starting and need to set up development, testing, and the first versions of production environments.
  • Spot instances let you take advantage of off-peak times and pay lower rates to process high volumes of information; they are simple to manage and implement.
  • Reserved (long-term) instances cover contracts with or without upfront payments for one year or more, ideal for production environments of applications that are already established.
EC2 is equivalent to Google Compute Engine instances or DigitalOcean droplets. In my perspective, AWS EC2 instances are more flexible and powerful than the others because they show a clear separation between the processing unit and the storage.


Wednesday, May 22, 2019

Why split your application in microservices?

Why do we need to split the logic?

When a nontechnical person thinks of an application, he visualizes it as an engine that solves a problem in the company in order to generate money. This monolithic concept is widely propagated among business managers and business owners. It's not wrong: in order to generate profit, all of the parts should act as one big application. However, the different components should be synchronized and run in a coordinated way. Aligning those multiple parts is the responsibility of the enterprise architects, who have to understand the view of the managers/owners who receive the profit of the engine and pay to keep it running effectively.

On the business side, the drivers change every day, and the company should adapt to those new drivers smoothly and promptly, without losing control over the process, ideally reducing cost and increasing quality. Each change should be implemented across all parts of the company, including its applications, and that is easier if the applications are modular and easy to assemble. This returns us to the original best practice of loosely coupled software components: reducing the dependencies between components makes it easy to replace or update one component in the big structure.

To reach this model, each task identified in the company should be implemented as a service and must run independently of the other components. At this point it's very important to have defined the input, output, and exception management of the task the service will represent: each service receives a set of parameters and executes a deterministic process to transform them into an output. The service could also perform an action over information, such as writing a file, sending an email, or modifying data in a database or any other resource available in the platform. When you have to update a process due to a new business requirement, you change just the services covering the tasks impacted by that change, trying to keep the input, output, and exception handling compatible with the original; if that's not possible, the change should modify only a small part of the communication.
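This input/output/exception contract can be sketched in Java. This is a hypothetical illustration, not a framework: the names ServiceTask, ServiceException, and NormalizeNameTask are invented for this example.

```java
// Hypothetical sketch of a service task with an explicit input, output, and
// exception contract; callers depend only on this interface, so the
// implementation can change without touching them.
interface ServiceTask<I, O> {
    O execute(I input) throws ServiceException;
}

// Unchecked exception so callers decide where to handle failures.
class ServiceException extends RuntimeException {
    ServiceException(String message) {
        super(message);
    }
}

// Example task: a deterministic transformation of its input parameter.
class NormalizeNameTask implements ServiceTask<String, String> {
    public String execute(String input) {
        if (input == null || input.trim().isEmpty()) {
            throw new ServiceException("input name is required");
        }
        return input.trim().toLowerCase();
    }
}
```

As long as the interface and the exception contract stay stable, replacing NormalizeNameTask with a new implementation does not affect the rest of the structure.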


Each component should also run in an independent environment, allocating its own resources and avoiding collisions with other tasks. In the same way, it could run on a pool of resources able to execute many instances of the same task in parallel without mixing data or fighting for resources. In older architecture models this was solved with a finite, preloaded pool of instances of the same component; that approach solved a lot of issues, but it fails when the pooled instances run out of resources and the component finally crashes. In new cloud-based architectures you can configure unlimited on-demand instances that are created as soon as they are invoked. An instance should not consume any resources while idle, and should not generate any cost to the company.

To communicate between microservices we have to establish a common language to move information between components. Usually this language requires identifying the main entities that play a role in the company's processes and generating detailed documentation for each one, covering all the possible stages and all the views each entity could have.
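As a small illustration of that common language, here is a hypothetical shared entity definition (the Customer class and its fields are invented for this example): the entity is documented once and reused by every service that exchanges customer data.

```java
// Hypothetical shared entity: every service that moves customer information
// agrees on this one documented shape.
final class Customer {
    final String id;
    final String name;
    final String status; // one of the documented stages, e.g. "ACTIVE"

    Customer(String id, String name, String status) {
        this.id = id;
        this.name = name;
        this.status = status;
    }
}
```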

Microservices must be coordinated by an orchestration process that knows the capabilities of each service and manages the business logic to generate value for the user. This process acts as a workflow engine, with all the steps and conditions required to process a business job, including all the exceptions and variations.
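A minimal sketch of that idea, assuming each service call can be modeled as a String-to-String step (the Orchestrator class is invented for this example; a real workflow engine would also handle conditions, retries, and exceptions):

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Minimal sketch of an orchestration process: it knows the ordered steps of a
// business workflow and feeds the output of one service into the next.
class Orchestrator {
    private final List<UnaryOperator<String>> steps;

    Orchestrator(List<UnaryOperator<String>> steps) {
        this.steps = steps;
    }

    String run(String input) {
        String current = input;
        for (UnaryOperator<String> step : steps) {
            current = step.apply(current); // each step stands in for one micro service call
        }
        return current;
    }
}
```

For example, `new Orchestrator(List.<UnaryOperator<String>>of(s -> s.trim(), s -> s.toUpperCase())).run("  order ")` returns `"ORDER"`.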

Monday, November 12, 2018

Serverless strategy - Micro services vs Traditional Applications


When you implement a new application in your organization, you are boarding the world of infrastructure management, one of the most expensive areas of technology, and you have to start dealing with terms like "end of life" or "extended warranty fees" that are often more expensive than the application generating money for your company.

To set up an application you have to install a server that runs 24/7, because you don't know when a customer or your employees will access the application. You have to buy a good machine that supports the load of running all day and all night, and then you have to ensure it has power and network at all times, which is an additional monthly cost. You also have to hire a company or a team to keep it running and provide support to internal and external users.


[Figure: an application running on a simple infrastructure — a general view of where the infrastructure costs sit in an in-house application.]

If this fragile infrastructure fails just at the moment a customer is accessing their information, you have to deal with an unhappy customer.

When you scale it to follow the growth of your organization, you have to add more servers, more locations, and a bigger infrastructure support team. That costs the organization a huge amount of money on a daily basis, reducing the room to invest in business growth and consuming big amounts of money in monthly bills.

The first strategy to solve this issue with multiple locations is to consolidate all the applications in one location with all the technical guarantees to run 24/7, be compliant with regulations and requirements, and keep customers and employees happy with your systems and your business. This strategy is costly at the beginning, because you have to invest a lot of money in the location; then you have to keep the infrastructure running, renewing the machines periodically, protecting against different kinds of attacks, and dealing with the obsolescence of the infrastructure software.

Now, let me ask a question.


Why do you have to pay full time for an infrastructure that you use only 40% of the time to generate income?



The application is not being accessed during nights and off-work hours; during working hours it is not being used all the time; and your company is not using all its functionalities all the time. What happens if I uninstall the server that processes my taxes during the year and only install it for tax season? What happens if the payroll module is shut down during the month and started just on the day the company calculates the payroll, until the day it pays the employees and prints pay stubs?

In the model we are evaluating now, the company has to pay the full cost of the infrastructure service whether it's in use or not: you have the server connected, you are paying rent for the space, you are paying for technical support, you are paying the license of the operating system, and you are paying the energy bill.


What if you could pay just for the time and resources the company actually uses in a particular period?


Nowadays, the experts in infrastructure offer a high-performance, dependable, secure platform that you can rent by the second, minute, hour, or day, letting you implement whatever you want to generate the income you like. This platform is the cloud. Call it AWS, Google Cloud, VMware, or any other available solution: they have the servers, they have the experts, they pay the energy bill, they pay for the licenses, and they provide everything to the customer according to real needs.

As part of those services, the cloud providers offer the opportunity to split your application into small parts, separating functionalities at a very granular level, and to create a micro environment to run each part. This micro environment stays down until the functionality is needed; a simple event triggers it, such as putting a file in a specific folder or receiving a message from a particular source. If you need the functionality more than once at the same time, the cloud provides multiple copies of the micro environment so it runs in parallel without delays on the customer side. You then pay server rent only for the number of seconds the process uses to return the information. This cost is very small compared to the infrastructure you would have to pay for a dedicated server keeping this functionality available 24/7.

Another great advantage of this serverless architecture is on-demand performance: for every invocation of the application, a separate, independent instance is reserved with the requirements adapted to the functionality that is running, giving end users the feeling that the server never gets overloaded and that the information and functionalities are available 100% of the time.

It's particularly helpful when you have to resolve workloads in response to a business event, like a marketing campaign, a sales season, or monthly cycles with defined processes, that generate heavy use of a particular functionality during a defined period of time. On the architecture side, that period shows up as an event, such as a new client request inserted in the database or a file with a new order uploaded to the website; the event triggers the function, and the platform allocates the resources while the function processes the information. Once the process finishes, the resources are released.
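The event-to-function flow above can be simulated in plain Java. This is a hypothetical in-process sketch, not a cloud provider's API: EventTrigger and onEvent are names invented for this example, and in a real platform the allocation and release are managed by the provider.

```java
import java.util.function.Function;

// Hypothetical in-process sketch of event-driven invocation: the handler only
// does work while an event is being processed; nothing runs while idle.
class EventTrigger {
    static <E, R> R onEvent(E event, Function<E, R> handler) {
        // On a real cloud platform, the provider would allocate an instance
        // here, run the handler, and release the resources afterwards.
        return handler.apply(event);
    }
}
```

For example, `EventTrigger.onEvent("order-123", e -> "processed " + e)` returns `"processed order-123"`; between events, no handler instance exists at all.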

To reach this model, the application should be developed using the microservices strategy, generating a clean split of the functionalities that identifies all the possible actions and a clear sequence between them, following a workflow that is aligned with business needs and covers all the details of the process. Each action should be configured as a microservice with a clear input, trigger, and output, and it should be invoked by a big orchestration process, similar to an ESB (Enterprise Service Bus), that knows the workflow and the actions involved in it.

In the next article I'll detail the idea of ESBs and workflows to coordinate microservices.

Conclusion:


It's cheaper to implement and maintain microservices as a strategy for building applications in the organization; however, this should be considered from the beginning of the project.

Saturday, October 27, 2018

Cloud storage advantage

These days every company has to keep information about its business in digital format: data related to accounting, operations, clients, or whatever the company does to generate money.
Every single piece of this information has different conditions for how it is stored, kept, and accessed, according to its nature and sensitivity. The drivers of those conditions are regulatory, strategic, or control-related, and each area has a different set of stakeholders.

To keep this big amount of information aligned and fulfilling this big set of conditions, each company has two possible ways:

1. Build their own infrastructure to keep the information safe.

2. Delegate this management to an expert partner.

Both options are valid and extensively used by companies in all sectors of the economy across the world. We will analyze both.

1. Own infrastructure:

The huge set of alternatives the company could select makes this way complicated: you could start by keeping all the information on the hard drive of your computer in a well-identified set of folders, or you could build a huge data center in a remote location.

In general, each requirement for keeping your data will generate an extra cost to your company.

If you have to keep three spreadsheets with accounting information from your first year, those spreadsheets have to be stored on a pen drive or a CD that, in effect, pays part of your rent for the next X number of years, until the tax authorities say it's no longer relevant.

Escalate it to the next level: your company has a small sales team that uses a local open-source CRM application, running on the powerful desktop your company has under your desk. You paid a big amount of money for that machine, you have to keep it running all the time, and you have to contract someone to keep it in good shape. You also have to keep a separate backup of the CRM database for at least one month. You are investing more than one hour per day in keeping your infrastructure running and paying an extra cost on your monthly electricity bill. And you have to keep the backup safe in a second location, which you also pay rent for.

Now your small CRM is a headache that consumes time and resources, generates operational dependencies and additional stress, and shifts the company's focus to supporting systems instead of generating money.

If you escalate it to the next levels, you can see that every single piece of data you put into your company will generate an infrastructure cost in the future.

2. Delegate it to an expert, cloud expert:


In the same case, you could start working with the spreadsheets on a cloud platform, editing them online with free tools and storing them in online storage that will bill you a couple of cents per month for the usage. Once you stop using a spreadsheet on a daily basis, you can move it to cold storage that costs just a couple of cents per year.

If you escalate it to the next level, you could buy a CRM for about five dollars per month, including infrastructure and backup, matching the complex requirements of your business needs. If you need to grow your business, the application can be configured to scale its infrastructure to support more traffic; if you need more applications, just add them to your package on the same infrastructure, and your bill increases according to your needs.





Thursday, September 3, 2015

How to create plain files in Java


In this section we will show a simple way to create a text file using Java. We use common tools: the Eclipse Luna IDE and JDK 1.7, running on a Windows 10 PC.

1. Go to File->New; in the popup window look for Java Project, select it, and click "Next >".

2. Assign a name to the project; in our case we will use "ArchivosPlanos". Eclipse gives you options to change the virtual machine version and the directories; for now we continue with the default values. To continue, click the "Finish" button.
Setting up a new project
3. Eclipse has created a new project with the following structure:
Structure of the new project
4. Now right-click on the src folder and select New->Package. Eclipse will ask for the name of the package; we will use co.net.seft.entrenamiento.archivosplanos. Then click the "Finish" button.
Creating a new package
5. Now we will create a new class inside the package we just created; this class should have its main method. Right-click on the package, select New->Class, and Eclipse will show the class-creation screen. Fill in the class name field with ArchivoPlanoManager, then check the public static void main(String[] args) checkbox.


6. Eclipse generated an empty class with the following structure:

package co.net.seft.entrenamiento.archivosplanos;

public class ArchivoPlanoManager {

    public static void main(String[] args) {
        // TODO Auto-generated method stub

    }

}

7. We will build two methods, one to write the file and another to read it. The writing method will be called writer and the reading method reader.
public void reader(String fileName) {

}

public void writer(String fileName) {

}

8. In the writer method we will create a file with four lines, each containing the line number followed by the same number of 'x' characters, as follows.
public void writer(String fileName) {
    FileWriter archivo = null;
    PrintWriter printWriter = null;
    try {
        archivo = new FileWriter(fileName);
        printWriter = new PrintWriter(archivo);

        for (int i = 0; i < 4; i++) {
            String linea = "" + i;
            for (int j = 0; j < i; j++) {
                linea += "x";
            }
            printWriter.println(linea);
        }

    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        try {
            if (null != archivo)
                archivo.close();
        } catch (Exception e2) {
            e2.printStackTrace();
        }
    }
}

In this case the writer method receives the name of the file to write as a parameter, opens it through a FileWriter object, and attaches an output pointer (PrintWriter) to the open file. It then executes the business logic to build each line and writes it to the file with printWriter.println(linea). Once all the lines are written, the file is closed; this is done in the finally block to avoid leaving the file open if the write fails.
9. In the reader method we will read the file we wrote and print its contents to the screen.
public void reader(String fileName) {

    File archivo = null;
    FileReader fileReader = null;
    BufferedReader bufferedReader = null;

    try {
        archivo = new File(fileName);
        fileReader = new FileReader(archivo);
        bufferedReader = new BufferedReader(fileReader);

        String linea;
        while ((linea = bufferedReader.readLine()) != null) {
            System.out.println(linea);
        }

    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        try {
            if (null != fileReader) {
                fileReader.close();
            }
        } catch (Exception e2) {
            e2.printStackTrace();
        }
    }
}
We use a File object to reference the file and open it through a FileReader; we extract the content through a BufferedReader, over which we iterate to read each line. We use a while loop with the expression (linea = bufferedReader.readLine()) != null to print the lines until the end of the file is reached.
Once we reach the end of the file, we close the FileReader inside the finally block of the exception handling, to avoid leaving the file open if something goes wrong during the read.

10. Finally we put everything together, invoked from main as follows.
public static void main(String[] args) {
    ArchivoPlanoManager archivoPlanoManager = new ArchivoPlanoManager();
    String archivo = "c:/Temp/pruebas.txt";
    archivoPlanoManager.writer(archivo);
    archivoPlanoManager.reader(archivo);
}

We instantiate the class we just created with the line ArchivoPlanoManager archivoPlanoManager = new ArchivoPlanoManager();, then invoke the writer method and afterwards the reader method. The console output of this program should match the contents of the file.
0
1x
2xx
3xxx
And the code, putting everything together, would be:
package co.net.seft.entrenamiento.archivosplanos;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.PrintWriter;

public class ArchivoPlanoManager {

    public static void main(String[] args) {
        ArchivoPlanoManager archivoPlanoManager = new ArchivoPlanoManager();
        String archivo = "c:/Temp/pruebas.txt";
        archivoPlanoManager.writer(archivo);
        archivoPlanoManager.reader(archivo);
    }

    public void reader(String fileName) {

        File archivo = null;
        FileReader fileReader = null;
        BufferedReader bufferedReader = null;

        try {
            archivo = new File(fileName);
            fileReader = new FileReader(archivo);
            bufferedReader = new BufferedReader(fileReader);

            String linea;
            while ((linea = bufferedReader.readLine()) != null) {
                System.out.println(linea);
            }

        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                if (null != fileReader) {
                    fileReader.close();
                }
            } catch (Exception e2) {
                e2.printStackTrace();
            }
        }
    }

    public void writer(String fileName) {
        FileWriter archivo = null;
        PrintWriter printWriter = null;
        try {
            archivo = new FileWriter(fileName);
            printWriter = new PrintWriter(archivo);

            for (int i = 0; i < 4; i++) {
                String linea = "" + i;
                for (int j = 0; j < i; j++) {
                    linea += "x";
                }
                printWriter.println(linea);
            }

        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                if (null != archivo)
                    archivo.close();
            } catch (Exception e2) {
                e2.printStackTrace();
            }
        }
    }

}
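As a side note (not part of the original steps), since Java 7 the java.nio.file API can shorten this considerably: Files.write and Files.readAllLines open and close the file for you, so no finally block is needed. A sketch under those assumptions — the class name ArchivoPlanoNio is invented for this example, and it writes to the system temp directory instead of c:/Temp for portability (Path.of and String.repeat require Java 11+):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Alternative sketch using java.nio.file: the same four lines are written and
// read back without any manual close() calls.
public class ArchivoPlanoNio {
    public static void main(String[] args) throws IOException {
        Path archivo = Path.of(System.getProperty("java.io.tmpdir"), "pruebas.txt");

        // Build the same content as the writer method: line number plus 'x's.
        List<String> lineas = new ArrayList<>();
        for (int i = 0; i < 4; i++) {
            lineas.add(i + "x".repeat(i));
        }
        Files.write(archivo, lineas);

        // Read the whole file back and print it, like the reader method.
        for (String linea : Files.readAllLines(archivo)) {
            System.out.println(linea); // prints 0, 1x, 2xx, 3xxx
        }
    }
}
```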

Saturday, August 23, 2014

Best practices to setup environments

In my experience, the best way to build the environments of a new application is to draw a diagram... yes, draw a diagram.
 
When you build a diagram, you are able to identify the dependencies and work through your concerns about how to feed your application, the main duties of each component, and the workflow of each process.
 
In a formal way, you should use UML and define the model using the 4+1 view schema to identify and understand all the domains of the application. In a practical way, you should build a diagram that is easy to understand and easy to share, and explain in plain words what you will do; this could even be a handmade diagram on a piece of paper.
 
Once you have the diagram, you should identify the external dependencies and the possible ways to feed each of those interfaces. These will give you the requirements to build the environments, e.g. whether you can capture information manually in the production environment or put the same information directly into the database for the other environments.
 
The conclusion of this analysis will give you a view of the possible features of each environment, and will let you know the alternatives for replacing a system that is not available in some of the environments. E.g., if the accounting system is not available to support a product stress test, you need to redirect that output feed to another system so it can be analyzed in another context, and this should be included in the test plan.