Feels like I’m jumping a bit from one topic to another, but I had some testing ongoing with the OCI Load Balancing service so I thought I’d write a post on it. I’ll also throw in a few comparisons with AWS ELB to give an idea of how Oracle has done its service.

In OCI, the Load Balancing (LB) service is a regional service that lives in your Virtual Cloud Network (VCN). It can be either public or private depending on your requirements, and it manages either TCP or HTTP traffic.

If you need a public LB, the service creates two LBs (primary and standby) in different Availability Domains (ADs) to provide high availability. This means you need to provide two different subnets from your VCN for the LB, and they will get two different private IPs. You can’t determine which one will be the primary.

The LB is also assigned a floating public IP, so if one of the ADs goes down the IP is transferred to the other LB, whereas in AWS the DNS name is linked to the ELB’s IP addresses.

If you instead need a private LB, it is configured in one AD only and gets a floating private IP. If that AD goes down there is no failover to another AD, so that LB goes down with it.

Creating the Load Balancer

For my test I created a public LB with three backend servers serving HTTP content on port 80. I used three servers so I could demonstrate some of the health check functionality Oracle has implemented.

lbr-oci-1
Creating the Load Balancer screen

In the LB creation screen above you can see that when I choose to create a public LB I need to select two public subnets that reside in different availability domains. I had created these beforehand.

Also, when you create the LB you need to select the correct shape for it. The options are 100Mbps, 400Mbps and 8Gbps, so there is plenty of variety depending on your requirements and how much you are willing to pay!
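
To show how these same choices look outside the console, below is a minimal sketch using the OCI Python SDK (the oci package) to create a public LB. The compartment and subnet OCIDs and the display name are placeholders, and the shape names should be checked against what list_shapes returns in your tenancy.

```python
# Minimal sketch, assuming the OCI Python SDK ("oci" package) and placeholder OCIDs.
import oci

config = oci.config.from_file()  # reads ~/.oci/config by default
lb_client = oci.load_balancer.LoadBalancerClient(config)

details = oci.load_balancer.models.CreateLoadBalancerDetails(
    compartment_id="ocid1.compartment.oc1..example",  # placeholder
    display_name="demo-public-lb",
    shape_name="100Mbps",        # check lb_client.list_shapes() for valid names
    is_private=False,            # a public LB needs two subnets in different ADs
    subnet_ids=[
        "ocid1.subnet.oc1..example-ad1",               # placeholder
        "ocid1.subnet.oc1..example-ad2",               # placeholder
    ],
)
response = lb_client.create_load_balancer(details)
# Creation is asynchronous; the work request id is returned in the headers.
print(response.headers.get("opc-work-request-id"))
```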

The other components required for the LB are:
Backend Set: a grouping of backend servers with a health check and a load balancing policy defined
Backend Servers: the destinations your LB routes traffic to
Listener: listens for incoming traffic on a specific port and routes it to a backend set

As mentioned, the backend set definition includes a health check policy. You define the path where your health check URL resides (for example /health.html in the case of HTTP) and the interval at which it is polled. Things like the expected response status code are also defined here.

You also define the load balancing policy: round robin, IP hash or least connections. All of these are described in the documentation.
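
As a rough sketch of how the backend set, health check and policy map to the API, here is what it could look like with the OCI Python SDK; the backend set name, the health check values and the load balancer OCID are placeholders.

```python
# Sketch only: a backend set with an HTTP health check polling /health.html.
import oci

config = oci.config.from_file()
lb_client = oci.load_balancer.LoadBalancerClient(config)

backend_set = oci.load_balancer.models.CreateBackendSetDetails(
    name="demo-backend-set",
    policy="ROUND_ROBIN",  # alternatives: IP_HASH, LEAST_CONNECTIONS
    health_checker=oci.load_balancer.models.HealthCheckerDetails(
        protocol="HTTP",
        port=80,
        url_path="/health.html",
        return_code=200,           # expected response status code
        interval_in_millis=10000,  # poll every 10 seconds
        timeout_in_millis=3000,
        retries=3,
    ),
)
lb_client.create_backend_set(backend_set, "ocid1.loadbalancer.oc1..example")
```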

Next, when you define the actual backend servers with their ports, comes my biggest UI-related gripe in the whole service. You need to know either the OCID or the IP of the server, and while there is a nice link to view the instances, you can’t copy the IP or OCID from anywhere! This would be so easy if there were just a drop-down to choose the servers from.

lbr-oci-2
Type in the OCID, which you have of course saved to a text file beforehand.

A helpful feature when creating the backend servers is the checkbox that lets the system create the proper security list rules so your servers can be accessed.
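
For comparison, adding the three backend servers through the SDK avoids the copy-paste problem entirely; this is only a sketch, with placeholder private IPs and a placeholder load balancer OCID.

```python
# Sketch only: add three backend servers by private IP to an existing backend set.
import oci

config = oci.config.from_file()
lb_client = oci.load_balancer.LoadBalancerClient(config)

lb_id = "ocid1.loadbalancer.oc1..example"  # placeholder
for ip in ["10.0.1.11", "10.0.1.12", "10.0.2.13"]:  # placeholder private IPs
    backend = oci.load_balancer.models.CreateBackendDetails(
        ip_address=ip,
        port=80,
        weight=1,  # equal weighting across the three servers
    )
    lb_client.create_backend(backend, lb_id, "demo-backend-set")
```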

Once you have added the servers you want in your backend set you can continue on to the listener. In the listener you define the port it listens on and the related backend set. Optionally you can create a path route set, which tells the listener which requests are routed to which backend set.
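
A listener for the setup above could look roughly like this with the SDK; again the names and OCID are placeholders, and the path route set line is only relevant if you created one.

```python
# Sketch only: an HTTP listener on port 80 routing to the backend set above.
import oci

config = oci.config.from_file()
lb_client = oci.load_balancer.LoadBalancerClient(config)

listener = oci.load_balancer.models.CreateListenerDetails(
    name="demo-http-listener",
    default_backend_set_name="demo-backend-set",
    port=80,
    protocol="HTTP",
    # path_route_set_name="demo-path-routes",  # optional, if you defined one
)
lb_client.create_listener(listener, "ocid1.loadbalancer.oc1..example")
```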

That’s almost it! You still need to edit your security lists so that traffic is allowed to the backend servers (if you didn’t do this already when creating the backend servers) and so that traffic is allowed to and from the listener. This is what the documentation says about security lists:

To enable backend traffic, your backend server subnets must have appropriate ingress and egress rules in their security lists. When you add backend servers to a backend set, the Load Balancing service Console can suggest rules for you, or you can create your own rules using the Networking service.

And for the listener, be sure to check out this.
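
If you prefer to manage the rules yourself rather than accept the suggested ones, the kind of ingress and egress rules involved could be expressed through the Networking service SDK roughly like this. The security list OCID and CIDRs are placeholders, and note that the update call replaces the existing rule lists rather than appending to them.

```python
# Sketch only: allow HTTP from the LB subnets to the backends and responses back.
import oci

config = oci.config.from_file()
net_client = oci.core.VirtualNetworkClient(config)

ingress = oci.core.models.IngressSecurityRule(
    protocol="6",          # TCP
    source="10.0.0.0/16",  # placeholder: the LB subnets' CIDR
    tcp_options=oci.core.models.TcpOptions(
        destination_port_range=oci.core.models.PortRange(min=80, max=80)
    ),
)
egress = oci.core.models.EgressSecurityRule(
    protocol="6",
    destination="10.0.0.0/16",  # placeholder: allow replies back towards the LB
)

# Careful: this replaces the whole ingress/egress rule lists of the security list.
net_client.update_security_list(
    "ocid1.securitylist.oc1..example",  # placeholder
    oci.core.models.UpdateSecurityListDetails(
        ingress_security_rules=[ingress],
        egress_security_rules=[egress],
    ),
)
```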

lbr-oci-4
Everything has been set up and the health of the servers is OK

Each part of the LB service reports its own health status, but I wanted to look more closely at the backend set during this test. The health is divided into the following categories: OK, WARNING, CRITICAL and UNKNOWN.

If every backend server checks out fine, then the status is OK.

If more than one but fewer than half of the backend servers show up as CRITICAL, WARNING or UNKNOWN, then the status is WARNING.

If more than half of the backend servers show up as CRITICAL, WARNING or UNKNOWN, then the status is CRITICAL.

And if any of the following conditions is met, then the status is UNKNOWN: more than half of the backend servers show up as UNKNOWN, the listener is not properly configured, or the system could not retrieve metrics.

All this is explained here.
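
To make the thresholds concrete, here is a small Python sketch of how I read those rules. It is only an illustration of the categories described above, not code from the service, and the precedence between the UNKNOWN and CRITICAL conditions is my own assumption.

```python
# Sketch of the backend set health rules as described above (illustration only).
def backend_set_status(statuses, listener_ok=True, metrics_available=True):
    """statuses: per-backend health strings, e.g. ["OK", "CRITICAL", "OK"]."""
    total = len(statuses)
    not_ok = sum(1 for s in statuses if s != "OK")
    unknown = sum(1 for s in statuses if s == "UNKNOWN")

    # Assumed precedence: UNKNOWN conditions are checked first.
    if not listener_ok or not metrics_available or unknown > total / 2:
        return "UNKNOWN"
    if not_ok > total / 2:
        return "CRITICAL"
    if not_ok > 0:          # anything non-OK below the halfway mark
        return "WARNING"
    return "OK"

# One of three servers CRITICAL, as in the screenshot below -> WARNING.
print(backend_set_status(["OK", "OK", "CRITICAL"]))
```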

lbr-oci-5
Example of health changing to WARNING when fewer than half of the servers are CRITICAL

Summary

If I had to say anything about creating an OCI LB compared to an AWS ELB, it’s that the ELB has similar components but the AWS UI is a bit more user friendly.

With OCI you need to know all the components you have to update, whereas in AWS the wizard takes you forward one step at a time. All of this goes away if you use an orchestration tool, of course.

You should also consider how many actual load balancers you need versus creating one load balancer with multiple listeners. You can also use SSL certificates with your LB and terminate SSL at the LB, use backend SSL, or use end-to-end SSL depending on your requirements.
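
As a final sketch, SSL termination at the LB could look roughly like this with the SDK, assuming a certificate bundle named demo-cert has already been uploaded to the load balancer; the names and OCID are placeholders.

```python
# Sketch only: HTTPS listener terminating SSL at the load balancer.
import oci

config = oci.config.from_file()
lb_client = oci.load_balancer.LoadBalancerClient(config)

https_listener = oci.load_balancer.models.CreateListenerDetails(
    name="demo-https-listener",
    default_backend_set_name="demo-backend-set",
    port=443,
    protocol="HTTP",
    ssl_configuration=oci.load_balancer.models.SSLConfigurationDetails(
        certificate_name="demo-cert",   # placeholder: an uploaded certificate bundle
        verify_peer_certificate=False,  # terminate at the LB, plain HTTP to backends
    ),
)
lb_client.create_listener(https_listener, "ocid1.loadbalancer.oc1..example")
```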

Finally, a link to the OCI Load Balancing Service documentation.
