Saturday, December 30, 2017

Configuring SSL Termination with WSO2 API Manager

When you set up WSO2 API Manager fronted by a load balancer, you have the option of terminating SSL for HTTPS requests. The load balancer decrypts incoming HTTPS messages and forwards them to the Carbon servers as plain HTTP, so APIM works with HTTP requests once they have passed through the load balancer. This is useful when you want to reduce the load that encryption places on your Carbon servers. To achieve this, the load balancer should be configured for TLS termination and the Tomcat RemoteIpValve should be enabled for the Carbon servers.

I am going to describe the steps you have to follow from the beginning, so that you can follow along. Note the following points as you go through these steps.

1. Configuring Load balancer

 

I am using nginx as the load balancer. If you are using a different load balancer (F5, for example), the exact configuration scripts will differ, but the following nginx guide gives you a basic understanding of what has to be done at the load balancer for this task, and you can use that knowledge to configure your own load balancer.
Configure the /etc/nginx/sites-enabled/default file as below.

server {
       listen 443;
       ssl on;
       ssl_certificate /etc/nginx/ssl/nginx.crt;
       ssl_certificate_key /etc/nginx/ssl/nginx.key;
       location /apimanager/carbon {
           index index.html;
           proxy_set_header X-Forwarded-Host $host;
           proxy_set_header X-Forwarded-Server $host;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_set_header Host $host;
           proxy_set_header X-Real-IP $remote_addr;
           proxy_set_header X-Forwarded-Proto $scheme;
           proxy_pass http://localhost:9763/carbon;
           proxy_redirect  http://localhost:9763/carbon  https://localhost/apimanager/carbon;
           proxy_cookie_path / /apimanager/carbon/;
       }
 
       location ~ ^/apimanager/store/(.*)registry/resource/_system/governance/apimgt/applicationdata/icons/(.*)$ {
           index index.html;
           proxy_set_header X-Forwarded-Host $host;
           proxy_set_header X-Forwarded-Server $host;
           proxy_set_header Host $host;
           proxy_set_header X-Real-IP $remote_addr;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_set_header X-Forwarded-Proto $scheme;
            proxy_pass http://localhost:9763/$1registry/resource/_system/governance/apimgt/applicationdata/icons/$2;
       }
 
 
       location ~ ^/apimanager/publisher/(.*)registry/resource/_system/governance/apimgt/applicationdata/icons/(.*)$ {
           index index.html;
           proxy_set_header X-Forwarded-Host $host;
           proxy_set_header X-Forwarded-Server $host;
            proxy_set_header Host $host;
           proxy_set_header X-Real-IP $remote_addr;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_set_header X-Forwarded-Proto $scheme;
            proxy_pass http://localhost:9763/$1registry/resource/_system/governance/apimgt/applicationdata/icons/$2;
        }
 
       location /apimanager/publisher {
          index index.html;
          proxy_set_header X-Forwarded-Host $host;
          proxy_set_header X-Forwarded-Server $host;
           proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $scheme;
          proxy_pass http://localhost:9763/publisher;
          proxy_redirect  http://localhost:9763/publisher  https://localhost/apimanager/publisher;
          proxy_cookie_path /publisher /apimanager/publisher;
      }
 
      location /apimanager/store {
          index index.html;
          proxy_set_header X-Forwarded-Host $host;
          proxy_set_header X-Forwarded-Server $host;
           proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $scheme;
          proxy_pass http://localhost:9763/store;
          proxy_redirect http://localhost:9763/store https://localhost/apimanager/store;
          proxy_cookie_path /store /apimanager/store;
       }

       location / {
              proxy_pass http://localhost:8280;
       }
}

Certificates for nginx have to be generated; follow https://docs.wso2.com/display/AM210/Adding+a+Reverse+Proxy+Serve for instructions. Then start the nginx server.
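For example, a self-signed certificate for testing can be generated with OpenSSL and nginx restarted as below. This is only a sketch: the key/certificate paths match the ones used in the configuration above, and the common name "localhost" is an assumption; use a properly signed certificate and your real hostname in production.

# Generate a self-signed key/certificate pair for testing (paths match the nginx config above)
sudo mkdir -p /etc/nginx/ssl
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/nginx/ssl/nginx.key \
    -out /etc/nginx/ssl/nginx.crt \
    -subj "/CN=localhost"

# Validate the configuration and restart nginx
sudo nginx -t
sudo service nginx restart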

The next configurations are done on the WSO2 API Manager side.

2. tomcat/catalina-server.xml file configuration 

 

Make the following configurations in <CARBON_HOME>/repository/conf/tomcat/catalina-server.xml.

a) Enabling RemoteIpValve for Carbon servers

Configure the RemoteIpValve as below.

<Valve className="org.apache.catalina.valves.RemoteIpValve"
       remoteIpHeader="X-Forwarded-For"
       protocolHeader="X-Forwarded-Proto" />
b) Set proxy port and hostname

<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           port="9443"
           proxyPort="443"
           hostname="localhost"
           bindOnInit="false"
           sslProtocol="TLS"
           ---
/>

3. carbon.xml configuration


Configure the <APIM_HOME>/repository/conf/carbon.xml file as below.
  • Uncomment the following element,
        <HttpAdminServices>*</HttpAdminServices>
  • Set,  
        <EnableHTTPAdminConsole>true</EnableHTTPAdminConsole>

  • Set hostname,
        <HostName>localhost</HostName>
        <MgtHostName>localhost</MgtHostName>

4. site.json files of web apps

a)
  • Edit the <APIM_HOME>/repository/deployment/server/jaggeryapps/store/site/conf/site.json file with the context and request URL as shown below.
  • This is done to configure the reverse proxy server for WSO2 API Store, so that you can route the requests that come to the store through a proxy server.
"reverseProxy" : {
        "enabled" : true, 
        "host" : "localhost", // If the reverse proxy does not have a domain name use the IP
        "context":"/apimanager/store",
        "regContext":"" // Use this only if a different path is used for the registry
    }
b)
  • Edit the <APIM_HOME>/repository/deployment/server/jaggeryapps/publisher/site/conf/site.json file with the context and host as shown below.
  • This is done to configure the reverse proxy server for WSO2 API Publisher, so that you can route the requests that come to the publisher through a proxy server. 
"reverseProxy" : {
        "enabled" : true, 
        "host" : "localhost",//If the reverse proxy does not have a domain name use the IP
        "context":"/apimanager/publisher",
        "regContext":"" // Use this only if a different path is used for the registry
    } 

5. Configuring api-manager.xml file.

  • Change the value of KeyValidatorClientType to WSClient in the <APIM_HOME>/repository/conf/api-manager.xml file.
  • You need to make this change when you change the value of the host, because requests made to the Key Manager will also start getting routed through the reverse proxy; therefore, this communication needs to happen over HTTP instead of TCP, which is Thrift's underlying transport.
        <KeyValidatorClientType>WSClient</KeyValidatorClientType>
  • Change the gateway endpoint URLs displayed on the Store,
         <GatewayEndpoint>http://localhost,https://localhost</GatewayEndpoint>  
    
    
  • Set the Store URL that is linked to from the Publisher,
         <APIStore>
                <URL>https://localhost/apimanager/store</URL>
         ---
         </APIStore>
    That's it!
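As a quick sanity check (assuming the load balancer is running on localhost), you can verify that the portals respond through the proxy over HTTPS; -k is used here only because of the self-signed certificate.

# Store and Publisher through the load balancer (self-signed certificate, hence -k)
curl -k -I https://localhost/apimanager/store
curl -k -I https://localhost/apimanager/publisher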
 
 

Sunday, July 2, 2017

WSO2 Puppet Deployment

WSO2 products are accompanied by Puppet modules, which make your life easier when setting up and configuring a product as per your requirements and deployment architecture. I am going to provide an introduction and a guide on how to use these Puppet modules for development or deployment purposes.

So if you are a developer who wants to customize a WSO2 Puppet module (to add further flexibility or more parameterized configurations), this post should be a good starting point.

And if you are a user who wants to directly set up and configure a Puppet environment for a certain enterprise deployment, you may read this too.

The WSO2 Puppet architecture was changed completely within the last year, so the Puppet modules of each WSO2 product are now in separate Git repositories, as opposed to the old structure where everything was in one repository. The old WSO2 puppet-modules repository can be found here if you just want to have a look. It has been deprecated, and all the latest product-related Puppet scripts are written under the new architecture, which I am going to describe here.

What is done by puppet...?

 

Before reading further, let's clarify what Puppet does with respect to WSO2 products. We have to understand this first.

For a beginner to Puppet who is getting ready to tackle the WSO2 Puppet modules, I would introduce Puppet as below. This is a very simple, preliminary introduction to what Puppet does (the concept is common to any other Puppet module too).

The following diagram (Figure 1) illustrates, in simple terms, what happens when we use the WSO2 APIM 2.1.0 Puppet module to deploy and configure the product in a production environment.
 
Figure 1




If that isn't 100% clear yet, don't worry :-D, I am going to explain.


In the repository, which we call a WSO2 Puppet module (take the wso2-apim-2.1.0 Puppet module for instance), there are configuration files acting as templates for each file that needs to be edited/configured in a product deployment, e.g. axis2.xml, carbon.xml, master-datasources.xml.

The difference between an actual vanilla product pack's config file and the related Puppet template file is that in the latter, the configurable values have been replaced with variables/parameters so that they can be filled in at runtime.

A Puppet module also includes files (Hiera files) with lists of values to be passed to each parameter/variable in those config files. These values, which we call Hiera data, are defined separately for each deployment pattern (or profile, if available).

So when we "run" puppet, 3 basic steps are executed by puppet, as mentioned in the above figure.

1 - Apply the configuration data (of the required pattern), into the puppet template files.
2 - Replace the vanilla wso2am-2.1.0 product's configuration files with the modified template files of step 1.
3 - Copy the modified product pack, in step 2, into the production environment and start the product server.


OK, now that you know what we do with Puppet, let's move in deeper. First, let's clarify the parts and pieces of the WSO2 Puppet modules.

Organization of WSO2 Puppet repositories


If you are going to work with a certain WSO2 product (for a Puppet deployment), you will have to deal with 3 functional components, which are available as Git repositories:
  1. The certain WSO2 product related repository
  2. puppet-base repository
  3. puppet-common repository
Both 2 and 3 are required for a puppet deployment of a WSO2 product.
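For example, to get these pieces locally you would clone the relevant repositories. The APIM module is used here for illustration; I am assuming the repositories live under the wso2 GitHub organization, as puppet-base and puppet-common do (see the references at the end).

# Clone the common and base repositories plus the product-specific module (APIM as an example)
git clone https://github.com/wso2/puppet-common.git
git clone https://github.com/wso2/puppet-base.git
git clone https://github.com/wso2/puppet-apim.git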

1. The certain WSO2 product related repository

Each WSO2 product has a puppet-module repository (i.e. puppet-apim, puppet-is, puppet-das, puppet-esb, puppet-iot, puppet-ei). Most of these have been released for the latest product release (as of June 2017); the puppet module repository list can be found here. These repositories consist of Puppet scripts that support multiple deployment patterns and multiple profiles, where available.

Let's take the WSO2 API Manager Puppet modules for instance. The repository consists of 3 Puppet modules related to the WSO2 APIM product. They are listed below, with the specific product each module relates to mentioned in front.
  1. wso2am_runtime - WSO2 API Manager 2.1.0
  2. wso2am_analytics - WSO2 APIM Analytics Server 2.1.0
  3. wso2is_prepacked - Pre-packaged WSO2 Identity Server 5.3.0 (for the IS-as-Key-Manager APIM deployment)
The wso2am_runtime module includes Puppet scripts that facilitate deployment of APIM in 7 deployment patterns, with 5 APIM profiles.


2. puppet-base repository

The WSO2 base Puppet repository can be found here. puppet-base is itself another "Puppet module" from Puppet's perspective. It provides the features for installing and configuring WSO2 products. At a high level it does the following:
  1. Install Java Runtime
  2. Clean CARBON_HOME directory
  3. Download and extract WSO2 product distribution
  4. Apply Carbon Kernel and WSO2 product patches
  5. Apply configuration data
  6. Start WSO2 server as a service or in foreground

3. puppet-common repository

WSO2 Puppet Common repository provides files required for setting up a Puppet environment to deploy WSO2 products.
  • manifests/site.pp: Puppet site manifest
  • scripts/base.sh: Base bash script file which provides utility bash methods.
  • setup.sh: The setup bash script for setting up a PUPPET_HOME environment for development work.
  • vagrant: A Vagrant script for testing Puppet modules using VirtualBox.


Setting up a puppet environment


There are basically 2 approaches to set up a Puppet environment.
  1. Using Vagrant and Oracle VirtualBox
  2. Master agent environment
It is recommended to select the appropriate approach considering the requirement.

1. Using Vagrant and Oracle VirtualBox

 

Vagrant can be used to set up the Puppet development environment and easily test a WSO2 product's Puppet module.

In this approach, Vagrant is used to automate the creation of a VirtualBox VM (Ubuntu 14.04) and to deploy/install the WSO2 product using the WSO2 Puppet modules.

This approach is much easier than the master-agent approach in terms of convenience of setup. However, it is less convenient for debugging errors, because bringing up a WSO2 product with Puppet through Vagrant takes a long time, as the process includes creating a VirtualBox virtual machine as well. If you are developing a WSO2 Puppet module from the beginning, this is not the recommended approach. But if you are not a newbie to Puppet, and have good expertise on how Puppet modules work with WSO2 products, you may use this approach (as you will make fewer errors).

Also, you cannot use this Puppet environment to deploy and install a WSO2 product into an actual production environment, because it installs the product into a VirtualBox virtual machine that is created automatically on the go.

For the steps to use this approach, follow the official WSO2 documentation wikis on GitHub here. A rough sketch of the workflow is shown below.
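The typical Vagrant cycle looks roughly like the following. This is only a sketch, since the exact directory layout and VM names come from the puppet-common repository, so follow the official wiki for the real steps.

# Run from the directory containing the Vagrantfile (assumed layout)
vagrant up          # create the VirtualBox VM and run the Puppet provisioning
vagrant ssh         # log in to the VM to inspect the deployed product
vagrant destroy -f  # tear the VM down when you are done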

2. Master agent environment

 

A master-agent environment can be used to deploy/install WSO2 products in actual production environments. Also, if you are developing a Puppet module from the beginning, or doing major customizations to existing Puppet modules, and your development task would take multiple days or weeks, it is better to follow this approach, because it is more convenient for debugging, testing time per run, re-running after customizations, etc. However, setting up a master-agent environment is a bit cumbersome, as it takes a lot of time and also needs multiple OS instances/computers.

To set up a master-agent Puppet environment with the WSO2 Puppet modules, follow the steps in the official WSO2 documentation wikis on GitHub. A minimal sketch of what a run looks like is given below.
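Once the environment is in place, a run is essentially triggered from the agent node. The hostnames below are assumptions, and the certificate-signing command applies to the Puppet 3.x/4.x versions current at the time of writing; the exact facts and steps expected by the WSO2 modules are documented in the wikis.

# On the agent node: point at the master and trigger a catalog run (hostname is an assumption)
sudo puppet agent -t --server puppetmaster.example.com

# On the master, the first time an agent connects: sign its certificate
sudo puppet cert sign agent-node.example.com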


References:

https://github.com/wso2/puppet-base/blob/master/README.md
https://github.com/wso2/puppet-common/blob/master/README.md

Sunday, May 28, 2017

Performance Testing-Monitoring-Analyzing for WSO2 Products

Performance testing is the process of determining the speed or effectiveness of a computer, network, software program or device [1]. Performance of a software application/product/service can be measured/tested via load tests.

The performance of WSO2 product-based application systems is also widely tested via such load tests. Beyond the performance tests themselves, the JVM heap and CPU usage can also be monitored to determine the causes of performance issues in a system/application/product.

In this post I will discuss important facts related to these, focusing on the following 2 aspects.

  1. Load Tests and result analysis. 
  2. Using Oracle JMC for performance monitoring.

Load Tests and Result Analysis


I will take WSO2 API Manager as the example for this topic. In the case of WSO2 API Manager, performance can be described using factors such as the following.
  1. Transactions Per Second (TPS)
  2. Response time (minimum/average/maximum)
  3. Error rate
Basically, if the TPS is very low, the average or maximum response time is very high, or the error rate is high, there is obviously an issue with the system; it may be in the configuration of the APIM product, or in the other systems interacting with APIM.


JMeter is a very convenient tool for generating load to perform load tests. We can write JMeter scripts to run long-running performance tests. In the case of WSO2 APIM, what we basically do is write test scripts that call APIs published in the API Store.

Following is a simple JMeter test script, composed to test an API in the Store of WSO2 API Manager.

APIMSimpleTest.jmx



You can simply download this file and rename it to APIMSimpleTest.jmx and open it with JMeter if you want to play around with it.

Following are the basic items in this test script.
  1. Thread Group - "API Calling Thread Group"
    Following items exist within this test group.
    1. HTTP Request - "Get Menu"
    2. View Results Tree
    3. Summary Report
  2. HTTP Header Manager

Thread Group - "API Calling Thread Group"



 

  • Number of Threads (Users) : 2000
  • Ramp-up Period(in seconds) : 100
  • Scheduler configuration - Duration : 3600
This test runs for an hour (3600 seconds) with 2000 threads (simulating 2000 users). The ramp-up period defines how long it takes to reach the defined thread count; here, 2000 threads over 100 seconds means roughly 20 new threads are started per second.



HTTP Request - "Get Menu"




This defines the HTTP request made to call the API (HTTP request method and path, web server protocol, server, and port number).



HTTP Header Manager




This sets the 2 headers for the API call.



Analyzing JMeter test results


"View Results Tree" and "Summary Report" items under Thread group are added to view and analyze the test results. These are called "listeners" and they can be added to a thread group by Right Click on thread group> Add> Listner>

"View Results Tree" item facilitates viewing all the http requests made during the test and their responses. If you provide an empty file (with .jtl extension) to the "Write results to file/ Read from File>Filename" field, all the basic information on the http/https request will be written and saved into that file during the test.

"Summary Report" listener displays a summary of the test including Samples count, min/max/average response times, Error %, throughput, etc.

Note that you can use more listeners to analyze JMeter test results using the .jtl file generated as mentioned above.

It is not required to have these listeners in the test script at run time to produce analysis reports. You can add any listener after the test, provide the .jtl file, and generate an analysis graph, table, etc.

JMeter ships with only a few listeners; if you want more, you can add them to JMeter as plugins.

Many important plugins can be downloaded from here. After adding these plugins, you will see them under the Add > Listener list of listeners.

Quick analysis report generation can be done using a .jtl file. This generates a complete analysis report (with graphs/tables) as an HTML web page. This is a very important and convenient feature of JMeter.

To generate a report from an existing .jtl file, just run the following single JMeter command.

./jmeter -g  <.jtl file> -o <output_dir_to_create_reports> 

This will display many graphs/tables, including the following.
  1. Test and report information
  2. APDEX (Application Performance Index)
  3. Requests summary
  4. Statistics per thread (average/min/max response times, throughput, error rates)
  5. Detailed descriptions of the errors that occurred
  6. Over-time charts
  7. Throughput-related graphs
  8. Response-time-related graphs
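The same dashboard can also be produced in one go at the end of a non-GUI test run. The file names below are just placeholders, and the output directory must be empty or not yet exist.

# Non-GUI run that writes results to a .jtl file and generates the HTML report afterwards
./jmeter -n -t APIMSimpleTest.jmx -l results.jtl -e -o report_output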

Using Oracle Java Mission Control for performance monitoring


When we analyze the performance of a WSO2 product, it is important to look at CPU usage, Java heap memory, threads, etc. Oracle Java Mission Control (JMC), which is shipped with Oracle Java, is an ideal tool for this.

Oracle Java Mission Control is a tool suite for managing, monitoring, profiling, and troubleshooting your Java applications. Oracle Java Mission Control has been included in standard Java SDK since version 7u40. JMC consists of the JMX Console and the Java Flight Recorder. [2]

If you start JMC on the machine where a WSO2 product runs, you can find a tremendous amount of information on its performance and functionality.

1. Using JMX Console


Java Mission Control (JMC) uses Java Management Extensions (JMX) to communicate with remote Java processes and the JMX Console is a tool in JMC for monitoring and managing a running JVM instance.

This tool presents live data about memory and CPU usage, garbage collections, thread activity, and more.

To use this to monitor a WSO2 product's JVM, start JMC on the computer on which the product is running. Under the JVM Browser, select the related JVM, "[1.x.x_xx]org.wso2.carbon.bootstrap.Bootstrap(xxxxx)".

Then right-click on it and select "Start JMX Console".


Now you can see the graphs and dashboards on Java heap memory, JVM CPU usage, etc. under the Overview section.

The JMX console also consists of a live MBean browser, by which you can monitor and manage the MBeans related to the respective JVM.

In the case of WSO2 APIM, the org.apache.synapse MBeans are useful for monitoring statistics related to API endpoints. Under the MBean tree org.apache.synapse > PassThroughLatencyView > nio_https_https, you can view the average backend latency, average latency, and many other useful attributes.


2. Using Java Flight Recorder


Java Flight Recorder (JFR) is a profiling and event collection framework built into the Oracle JDK. It can be used to collect recordings and save them to a file for later analysis.

Run JFR for a WSO2 product instance via JMC

To run JFR for a WSO2 product instance via JMC, for a fixed time period, follow the below steps.

  • Select the related JVM, "[1.x.x_xx]org.wso2.carbon.bootstrap.Bootstrap(xxxxx)".
  • Then right-click on it and select "Start Flight Recording". You will be prompted whether to enable Java commercial features; click "Yes".
  • Then provide a file name and location to dump the recording file.
  • Select "Time Fixed Recording" and provide the recording time you want and click "Finish".



  • After the provided time, the recording .jfr file will be saved in the given location. You can open it in JMC at any time to analyze the recording.

Running JFR from Command Line

To run JFR for a WSO2 product instance via the command line, execute the following commands on the computer where the instance is running.

>jcmd carbon VM.unlock_commercial_features

This unlocks commercial features for the WSO2 Carbon JVM, which enables running JFR. Note that the name "carbon" here can be replaced by any period-separated part of the related JVM main-class name, "org.wso2.carbon.bootstrap.Bootstrap" (jcmd matches a process by its PID or by its main class name).
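If you are unsure what name or PID to use, running jcmd with no arguments lists the running JVMs together with their main classes, so you can pick the Carbon process from that list.

# List running JVMs; pick the PID or a unique part of the main class name
jcmd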

>jcmd carbon JFR.start settings=profile duration=3600s name=FullPerfTest filename=recording-1.jfr

This command starts JFR for the WSO2 product JVM for a duration of 3600 seconds; the recording file is dumped to <WSO2_PRODUCT_HOME> with the name recording-1.jfr at the end of the given duration.
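While the recording is running, you can also check its status or dump what has been collected so far without waiting for the full duration. The recording name below matches the one given to JFR.start above, and the dump file name is just a placeholder.

# Check the status of active recordings
jcmd carbon JFR.check

# Dump what has been recorded so far to a separate file
jcmd carbon JFR.dump name=FullPerfTest filename=partial-recording.jfr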

You can refer to this blog post [3] to learn about JFR in detail.


References
[1] - http://searchsoftwarequality.techtarget.com/definition/performance-testing
[2] - https://www.prosysopc.com/blog/using-java-mission-control-for-performance-monitoring/
[3] - https://medium.com/@chrishantha/using-java-flight-recorder-2367c01deacf