How to Use Nginx as an HTTP Load Balancer in Linux

May 29, 2020
designlinux

When setting up multiple application servers for redundancy, load balancing is a commonly used mechanism for efficiently distributing incoming service requests or network traffic across a group of back-end servers.

Load balancing has several advantages, including increased application availability through redundancy, improved reliability, and scalability (more servers can be added to the pool when traffic increases). It also brings about improved application performance, among other benefits.

Recommended Read: The Ultimate Guide to Secure, Harden and Improve Performance of Nginx Web Server

Nginx can be deployed as an efficient HTTP load balancer to distribute incoming network traffic and workload among a group of application servers, in each case returning the response from the selected server to the appropriate client.

The load balancing methods supported by Nginx are:

  • round-robin – distributes requests to the application servers in turn. It is used by default when no method is specified,
  • least-connected – assigns the next request to the least busy server (the server with the lowest number of active connections),
  • ip-hash – a hash function based on the client’s IP address determines which server is selected for the next request. This method allows for session persistence (it ties a client to a particular application server).

In addition, you can use server weights to influence the Nginx load balancing algorithms at a more advanced level. Nginx also supports passive health checks: if a server’s response fails with an error, Nginx marks that server as failed for a configurable amount of time (10 seconds by default) and avoids picking it for subsequent incoming requests during that period.
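For instance, a minimal sketch of an upstream block combining both features might look like the following (the weight, max_fails, and fail_timeout values here are arbitrary illustrative choices, not part of the configuration built later in this guide):

upstream backend {
        # send roughly three out of every four requests to the first server
        server 192.168.58.5 weight=3;
        # mark the second server as failed for 30 seconds after 2 failed attempts
        server 192.168.58.8 max_fails=2 fail_timeout=30s;
}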

This practical guide shows how to use Nginx as an HTTP load balancer to distribute incoming client requests between two servers, each running an instance of the same application.

For testing purposes, each application instance is labeled (on the user interface) to indicate the server it is running on.
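For example, if each application server simply serves a static test page, you could place a distinguishing index.html on each machine; a minimal sketch, assuming the default document root /var/www/html:

$ echo "Served from application server 1 (192.168.58.5)" | sudo tee /var/www/html/index.html   [On 192.168.58.5]
$ echo "Served from application server 2 (192.168.58.8)" | sudo tee /var/www/html/index.html   [On 192.168.58.8]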

Testing Environment Setup

Load Balancer: 192.168.58.7
Application server 1: 192.168.58.5
Application server 2: 192.168.58.8

On each application server, the application instance is configured to be accessed using the domain tecmintapp.lan. Assuming this is a fully registered domain, we would add the following to the DNS settings.

A Record   		@   		192.168.58.7

This record tells clients where requests for the domain should be directed, in this case to the load balancer (192.168.58.7). Note that DNS A records only accept IPv4 addresses. Alternatively, for testing purposes, the /etc/hosts file on the client machines can be used instead, with the following entry.

192.168.58.7  	tecmintapp.lan

Setting Up Nginx Load Balancing in Linux

Before setting up Nginx load balancing, you must install Nginx on your server using the default package manager for your distribution as shown.

$ sudo apt install nginx   [On Debian/Ubuntu]
$ sudo yum install nginx   [On CentOS/RHEL]   
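Once installed, you can confirm the Nginx version and make sure the service is up and running before proceeding.

$ nginx -v
$ sudo systemctl start nginx
$ sudo systemctl status nginx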

Next, create a server block file called /etc/nginx/conf.d/loadbalancer.conf (you can use any name of your choice).

$ sudo vi /etc/nginx/conf.d/loadbalancer.conf

Then copy and paste the following configuration into it. This configuration defaults to round-robin as no load balancing method is defined.

 
upstream backend {
    server 192.168.58.5;
    server 192.168.58.8;
}

server {
    listen      80 default_server;
    listen      [::]:80 default_server;
    server_name tecmintapp.lan;

    location / {
        proxy_redirect      off;
        proxy_set_header    X-Real-IP $remote_addr;
        proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header    Host $http_host;
        proxy_pass          http://backend;
    }
}

In the above configuration, the proxy_pass directive (which must appear inside a location block, / in this case) passes requests to the group of proxied HTTP servers referenced by the name backend, defined with the upstream directive (used to define a group of servers). The requests will be distributed between the servers using the weighted round-robin mechanism; since no weights are specified here, each server uses the default weight of 1.

To employ the least-connected mechanism, use the following configuration.

upstream backend {
    least_conn;
    server 192.168.58.5;
    server 192.168.58.8;
}

To enable the ip_hash session persistence mechanism, use:

upstream backend {
    ip_hash;
    server 192.168.58.5;
    server 192.168.58.8;
}

You can also influence the load balancing decision using server weights. With the following configuration, out of every five client requests, four will be assigned to the application server 192.168.58.5 and one will go to 192.168.58.8.

upstream backend {
    server 192.168.58.5 weight=4;
    server 192.168.58.8;
}

Save and close the file. Then check that the Nginx configuration syntax is correct after making the changes, by running the following command.

$ sudo nginx -t

If the configuration is OK, restart and enable the Nginx service to apply the changes.

$ sudo systemctl restart nginx
$ sudo systemctl enable nginx

Testing Nginx Load Balancing in Linux

To test the Nginx load balancing, open a web browser and navigate to the following address.

http://tecmintapp.lan

Once the website interface loads, take note of the application instance that served it, then refresh the page repeatedly. At some point, the page should be served from the second server, indicating that load balancing is working.
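Alternatively, you can watch the distribution from the command line with curl; assuming the default round-robin method and application pages labeled per server as described earlier, consecutive requests should alternate between the two back-ends.

$ for i in {1..6}; do curl -s http://tecmintapp.lan; done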

Check Nginx Load Balancing in Linux

You have just learned how to set up Nginx as an HTTP load balancer in Linux. We would like to know your thoughts about this guide, and especially about employing Nginx as a load balancer, via the feedback form below. For more information, see the Nginx documentation about using Nginx as an HTTP load balancer.
