Nginx Cookbook
Installation
On Ubuntu
sudo apt update
sudo apt install nginx -y
On CentOS
sudo yum update
sudo yum install epel-release
sudo yum install nginx -y
Check the version
nginx -v
Configuration Directory
ls /etc/nginx
The output should look like this:
conf.d fastcgi_params koi-win modules-available nginx.conf scgi_params sites-enabled uwsgi_params
fastcgi.conf koi-utf mime.types modules-enabled proxy_params sites-available snippets win-utf
The most important files and directories are:
| File/Directory | Description |
|---|---|
| nginx.conf | Main NGINX configuration file containing global settings and directives. |
| sites-available | Directory for storing configuration files for individual sites (virtual hosts). |
| sites-enabled | Directory containing symbolic links to enabled site configuration files from sites-available. |
| conf.d | Additional configuration files that are included automatically, such as global settings and services. |
Why are there conf.d, sites-available, and sites-enabled directories?
- sites-available and sites-enabled: primarily used to manage individual site configurations. You create site configurations in sites-available and enable them by symlinking into sites-enabled.
- conf.d: used for configurations that are not site-specific, such as load balancing, SSL settings, logging configurations, or additional server blocks that apply to multiple sites.
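For example, a compression policy that should apply to every site could live in conf.d. This is only an illustrative sketch; the file name and exact settings are assumptions, not part of a stock install:

```nginx
# /etc/nginx/conf.d/gzip.conf — shared by all server blocks
gzip on;
gzip_types text/plain text/css application/json application/javascript;
gzip_min_length 1024;
```

Anything dropped into conf.d with a .conf extension is included automatically by the default nginx.conf.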
Starting NGINX
sudo systemctl start nginx
sudo systemctl enable nginx
Default Configuration
There's a default config in /etc/nginx/sites-available (symlinked from /etc/nginx/sites-enabled) that serves files from /var/www/html, so you can see the default page by visiting http://your_server_ip.
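On Ubuntu, that default site (sites-available/default) looks roughly like this; minor details vary by distribution and version:

```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }
}
```

The default_server flag is what makes this block answer requests that match no other server_name.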
Real Case
I have two domains, foo.com and bar.com, managed on Namecheap. They are different websites, but I only have one server, whose public IP is $public_ip. How can I bind the two domains to the server? I also have the following requirements:
- The websites host static webpages.
- www.foo.com and www.bar.com should redirect to foo.com and bar.com without hurting SEO.
- First set up NGINX over plain HTTP as a proof of concept.
- Then secure the sites with TLS; the certificates should auto-renew before they expire, and the auto-renewal setup should be verified.
- If a user visits over HTTP, redirect them to HTTPS.
- How do I remove bar.com if I no longer want it, along with its TLS certificate (and auto-renewal)?
- If I have an API service running at localhost:8080, how do I expose it at api.foo.com?
- How do I add an access key and rate limiting to api.foo.com?
- If the same backend service also runs at localhost:9090, how do I load-balance across the two instances?
- How do I monitor the server's access log?
In summary, it’s a real case that covers most of the common configurations of Nginx:
- Serving static files
- Serving multiple sites in one server
- Adding SSL/TLS & auto-renew
- Removing a site
- Reverse proxy
- Load balancing
- Access control
- Rate limiting
- Logging
DNS Settings
On Namecheap, set the following DNS records for both domains:
- A Record for foo.com & www.foo.com pointing to $public_ip
- A Record for bar.com & www.bar.com pointing to $public_ip
Basic NGINX Configuration
Create configuration files for foo.com and bar.com.

foo.com Configuration

Create a new NGINX server block for foo.com:
vi /etc/nginx/sites-available/foo.com
Add the following configuration:
server {
    listen 80;
    server_name foo.com www.foo.com;

    root /var/www/foo.com;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    # Redirect www.foo.com to foo.com
    if ($host = 'www.foo.com') {
        return 301 http://foo.com$request_uri;
    }
}
Create the directory and a test HTML file:
mkdir -p /var/www/foo.com
echo '<h1>Welcome to foo.com!</h1>' > /var/www/foo.com/index.html
Enable the configuration:
ln -s /etc/nginx/sites-available/foo.com /etc/nginx/sites-enabled/
bar.com Configuration

Create a new NGINX server block for bar.com:
vi /etc/nginx/sites-available/bar.com
Add the following configuration:
server {
    listen 80;
    server_name bar.com www.bar.com;

    root /var/www/bar.com;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    # Redirect www.bar.com to bar.com
    if ($host = 'www.bar.com') {
        return 301 http://bar.com$request_uri;
    }
}
Create the directory and a test HTML file:
mkdir -p /var/www/bar.com
echo '<h1>Welcome to bar.com!</h1>' > /var/www/bar.com/index.html
Enable the configuration:
ln -s /etc/nginx/sites-available/bar.com /etc/nginx/sites-enabled/
Restart NGINX:
systemctl restart nginx
Verify that the sites are working with curl http://foo.com and curl http://bar.com.
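If DNS hasn't propagated yet, you can still exercise the name-based virtual hosts directly against the server IP. A sketch using curl's --resolve flag, which maps a hostname to $public_ip for a single request:

```shell
# Hit each vhost by name without relying on DNS
curl --resolve foo.com:80:$public_ip http://foo.com
curl --resolve bar.com:80:$public_ip http://bar.com

# Alternatively, set the Host header manually
curl -H "Host: foo.com" http://$public_ip
```

This is also a handy way to test a new server block before cutting DNS over to it.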
Set Up TLS with Let’s Encrypt
Install Certbot and the NGINX plugin:
sudo apt install certbot python3-certbot-nginx -y
Obtain certificates for foo.com and bar.com:
certbot --nginx -d foo.com -d www.foo.com
certbot --nginx -d bar.com -d www.bar.com
Certbot will automatically update your NGINX configuration to use HTTPS and reload it. The foo.com configuration will look like this:
server {
    server_name foo.com www.foo.com;

    root /var/www/foo.com;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    # Redirect www.foo.com to foo.com
    if ($host = 'www.foo.com') {
        return 301 http://foo.com$request_uri;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/foo.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/foo.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = www.foo.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = foo.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name foo.com www.foo.com;
    return 404; # managed by Certbot
}
Verify that TLS is working by visiting https://foo.com and https://bar.com.
Auto-Renewal Details
Certbot actually adds a systemd timer; you can check it with:
systemctl list-timers
or
systemctl status certbot.timer
And the corresponding service file:
cat /lib/systemd/system/certbot.service
[Unit]
Description=Certbot
Documentation=file:///usr/share/doc/python-certbot-doc/html/index.html
Documentation=https://certbot.eff.org/docs
[Service]
Type=oneshot
ExecStart=/usr/bin/certbot -q renew
PrivateTmp=true
Certbot performs renewals by reading the config files in /etc/letsencrypt/renewal/.
Auto-Renewal Hook
Although Certbot takes care of auto-renewal, it doesn't reload NGINX, so the renewed certificates wouldn't take effect.
We need to add a hook that does this extra work when a renewal completes.
There are three directories for hooks:
- pre: Scripts that run before the renewal process.
- deploy: Scripts that run after a successful renewal.
- post: Scripts that run after the entire renewal process, regardless of whether any certificates were renewed.
We create a new script /etc/letsencrypt/renewal-hooks/deploy/reload-nginx.sh
:
#!/bin/bash
systemctl reload nginx
Make it executable:
chmod +x /etc/letsencrypt/renewal-hooks/deploy/reload-nginx.sh
Test the hook script to ensure it works correctly:
sudo /etc/letsencrypt/renewal-hooks/deploy/reload-nginx.sh
Now, when the certificates are renewed, NGINX will be reloaded automatically.
Simulate the Auto-Renewal Process
Certbot's auto-renewal is set up by default. You can simulate the renewal process with:
certbot renew --dry-run
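This is also the point where the "remove bar.com" requirement slots in. A sketch of the teardown, assuming the setup from this guide:

```shell
# Disable and delete the site configuration
sudo rm /etc/nginx/sites-enabled/bar.com
sudo rm /etc/nginx/sites-available/bar.com

# Delete the certificate; this also removes its renewal config from
# /etc/letsencrypt/renewal/, so auto-renewal stops for bar.com only
sudo certbot delete --cert-name bar.com

# Remove the web root and reload NGINX
sudo rm -rf /var/www/bar.com
sudo nginx -t && sudo systemctl reload nginx
```

foo.com's certificate and renewal timer are untouched; the timer simply finds one fewer config file in /etc/letsencrypt/renewal/.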
Reverse Proxy For API Service
DNS Settings: Add an A Record for api.foo.com pointing to $public_ip.
Create a new configuration file:
vi /etc/nginx/sites-available/api.foo.com
server {
    listen 80;
    server_name api.foo.com;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Enable the site and reload NGINX:
sudo ln -s /etc/nginx/sites-available/api.foo.com /etc/nginx/sites-enabled/
sudo systemctl reload nginx
Prepare an API service in Python (save it as api.py):
import argparse
import http.server
import json
import socketserver

# Define the handler to respond with JSON
class MyHandler(http.server.SimpleHTTPRequestHandler):
    def do_GET(self):
        # Check the requested path
        if self.path == '/health':
            # Health check: plain-text "ok"
            response_text = 'ok'
            self.send_response(200)
            self.send_header('Content-type', 'text/plain')
            self.end_headers()
            # Write content as utf-8 data
            self.wfile.write(response_text.encode('utf-8'))
        else:
            # Dummy JSON response for every other path
            response = {
                'status': 'success',
                'message': 'This is a dummy JSON response'
            }
            response_text = json.dumps(response)
            self.send_response(200)
            self.send_header('Content-type', 'application/json')
            self.end_headers()
            # Write content as utf-8 data
            self.wfile.write(response_text.encode('utf-8'))

# Parse command-line arguments
parser = argparse.ArgumentParser(description='Simple Python API Server')
parser.add_argument('--port', type=int, default=8080,
                    help='Port to run the server on (default: 8080)')
args = parser.parse_args()
PORT = args.port

# Create the server and serve until the process is interrupted
with socketserver.TCPServer(("", PORT), MyHandler) as httpd:
    print(f"Serving at port {PORT}")
    httpd.serve_forever()
Run the API service:
python3 api.py --port 8080 &
python3 api.py --port 9090 &
Test the API services with curl http://localhost:8080 and curl http://localhost:9090.
Then test the reverse proxy with curl http://api.foo.com.
Load Balancing
Update the configuration file /etc/nginx/sites-available/api.foo.com to define upstream servers. The upstream directive defines a group of servers that can be referenced by the proxy_pass directive.
upstream backend {
    server localhost:8080;
    server localhost:9090;
}

server {
    listen 80;
    server_name api.foo.com;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
The site is already enabled from the reverse proxy step, so just test the configuration and reload NGINX:
sudo nginx -t
sudo systemctl reload nginx
Other Load Balancing Methods
By weight:
upstream backend {
    server localhost:8080 weight=3;
    server localhost:9090 weight=1;
}
By IP hash:
upstream backend {
    ip_hash; # Use IP hash for load balancing
    server localhost:8080;
    server localhost:9090;
}
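Another built-in method is least_conn, which sends each request to the server with the fewest active connections; it's useful when request durations vary a lot:

```nginx
upstream backend {
    least_conn; # Prefer the server with fewer in-flight requests
    server localhost:8080;
    server localhost:9090;
}
```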
Access Control
Add the following configuration to validate the Authorization header:

map $http_authorization $auth_token {
    default "";
    "your_access_key" "valid";
}

upstream backend {
    server localhost:8080;
    server localhost:9090;
}

server {
    listen 80;
    server_name api.foo.com;

    location / {
        if ($auth_token != "valid") {
            return 401;
        }

        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Test the configuration with nginx -t, then reload NGINX with systemctl reload nginx.
This configuration does the following:
- Checks the
Authorization
header against the valueyour_access_key
. -
If the token is valid, the request is proxied to the backend service running on
localhost:8080
. -
If the token is invalid, the request returns a
401 Unauthorized
response.
Test with curl
curl -H "Authorization: your_access_key" http://api.foo.com
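It's worth checking the rejection path as well; without the key (or with a wrong one) the map leaves $auth_token empty and the request should come back as 401:

```shell
# No Authorization header -> expect HTTP 401
curl -i http://api.foo.com

# Wrong key -> also expect HTTP 401
curl -i -H "Authorization: wrong_key" http://api.foo.com
```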
Rate Limiting
Add rate limiting to the configuration:
# Define the rate limiting zone
limit_req_zone $binary_remote_addr zone=mylimit:10m rate=1r/s;

map $http_authorization $auth_token {
    default "";
    "your_access_key" "valid";
}

upstream backend {
    server localhost:8080;
    server localhost:9090;
}

server {
    listen 80;
    server_name api.foo.com;

    location / {
        # Apply rate limiting and set the limit-exceeded status code to 429
        limit_req zone=mylimit burst=5 nodelay;
        limit_req_status 429;

        if ($auth_token != "valid") {
            return 401;
        }

        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
limit_req_zone $binary_remote_addr zone=mylimit:10m rate=1r/s;

- $binary_remote_addr: the client's IP address in binary format, which saves memory compared to the textual representation. It is the key used to track each client's request rate separately.
- zone=mylimit:10m: defines a shared memory zone named mylimit and allocates 10 megabytes for it. This memory stores the state of the clients (their IP addresses and the number of requests they have made); with 10MB, NGINX can track a large number of unique clients.
- rate=1r/s: sets the average rate limit to 1 request per second. Each client, identified by IP address, is allowed to make 1 request per second on average.

limit_req zone=mylimit burst=5 nodelay;

- limit_req zone=mylimit: applies the rate limiting defined in the mylimit zone created by the limit_req_zone directive.
- burst=5: allows a client to exceed the rate limit temporarily by up to 5 additional requests. The burst parameter specifies the maximum number of requests that can be queued above the rate limit; with burst=5, a client can make up to 6 requests in quick succession before being throttled.
- nodelay: by default, NGINX delays requests that exceed the rate limit within the burst to smooth out traffic spikes. The nodelay parameter disables this behavior, so all 5 burst requests are processed immediately without delay. Once the burst limit is exceeded, any further requests are rejected or delayed according to the rate limit.
Example Scenario

- A client with IP address 192.168.1.1 can make 1 request per second.
- If this client makes 6 requests within a second:
  - The first request is allowed immediately.
  - The next 5 requests are also allowed immediately because of the burst=5 setting.
- If the client makes a 7th request within the same second, it is rejected with a 429 status code because the rate limit is exceeded.
This configuration allows you to handle temporary traffic spikes gracefully by permitting short bursts of requests while enforcing an average rate limit to prevent sustained high traffic from overwhelming your backend servers.
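The burst accounting can be sketched in a few lines of Python. This is a simplified model of the leaky-bucket bookkeeping described above (real NGINX tracks state per client IP in the shared memory zone, with millisecond precision), just to make the 6-allowed/7th-rejected arithmetic concrete:

```python
RATE = 1.0   # rate=1r/s
BURST = 5    # burst=5

def make_limiter():
    state = {"excess": 0.0, "last": None}

    def allow(now):
        # The bucket "leaks" RATE requests per second of elapsed time.
        if state["last"] is not None:
            state["excess"] = max(state["excess"] - (now - state["last"]) * RATE, 0.0)
        state["last"] = now
        # Reject once the client is more than BURST requests ahead of the rate.
        if state["excess"] > BURST:
            return 429
        state["excess"] += 1
        return 200

    return allow

allow = make_limiter()
# Seven requests arriving within the same second:
codes = [allow(now=0.1 * i) for i in range(7)]
print(codes)  # [200, 200, 200, 200, 200, 200, 429]
```

The first request plus the 5-request burst pass immediately (nodelay), and the 7th gets 429, matching the scenario above.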
Test the Rate Limiting
Reload NGINX:
sudo systemctl reload nginx
Test with curl:
for i in {1..15}; do curl -H "Authorization: your_access_key" http://api.foo.com/ -w "\nHTTP Status: %{http_code}\n"; sleep 0.1; done
Monitoring and Logging
Modify /etc/nginx/nginx.conf:
http {
    ...
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    ...
}
Reload NGINX:
sudo systemctl reload nginx
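With the main format in place, a couple of standard one-liners cover day-to-day monitoring (the field numbers assume the log_format shown above):

```shell
# Follow requests in real time
sudo tail -f /var/log/nginx/access.log

# Count responses by status code ($status is field 9 in this format)
sudo awk '{print $9}' /var/log/nginx/access.log | sort | uniq -c | sort -rn

# Top 10 client IPs by request count
sudo awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head
```

For anything heavier, tools like GoAccess can parse the same log format into a live dashboard.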