Laravel Forge server not accepting AWS ELB Health Check request


Our app was running on a single server on Laravel Forge, but we recently had to add another one, so we put both of them behind a new AWS ELB; we preferred that over Forge's own load balancer solution for scalability reasons. Everything is working fine, except that the health check is failing on the ELB and marking both targets as unhealthy. Even though that doesn't cause any visible issues, I'd like to fix it so I can use monitoring such as CloudWatch. I logged into one of the servers to check the logs, and this is what I found: nginx is returning HTTP 444 to the health check requests:

/var/log/nginx# tail -f access.log
[20/Jun/2023:23:24:23 +0000] "GET /health-check HTTP/1.1" 444 0 "-" "ELB-HealthChecker/2.0"

And this is my current nginx file:

# FORGE CONFIG (DO NOT REMOVE!)
include forge-conf/mywebsite.com/before/*;

server {
    listen 80;
    listen [::]:80;
    server_name mywebsite.com;
    server_tokens off;
    root /home/forge/mywebsite.com/current/public;

    # FORGE SSL (DO NOT REMOVE!)
    # ssl_certificate;
    # ssl_certificate_key;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;
    ssl_dhparam /etc/nginx/dhparams.pem;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    index index.html index.htm index.php;

    charset utf-8;

    # FORGE CONFIG (DO NOT REMOVE!)
    include forge-conf/mywebsite.com/server/*;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    access_log off;
    error_log  /var/log/nginx/mywebsite.com-error.log error;

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php8.1-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }
}

# FORGE CONFIG (DO NOT REMOVE!)
include forge-conf/mywebsite.com/after/*;

So far I've tried adding this at the start of the nginx file but no luck:

location /health-check {
    access_log off;
    return 200;
}
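
For what it's worth, a location block is only valid inside a server {} block, so it would have to sit next to the other location directives. A minimal sketch of that placement (it only takes effect if the request is actually routed to this vhost, i.e. the Host header matches server_name or this server is the default_server):

    # Inside the server { } block above, alongside the other location blocks
    location = /health-check {
        access_log off;
        return 200;
    }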

I also tried adding 444 as an acceptable response code on the ELB health check, but it didn't like it, and to be honest neither did I.


There are 2 answers

Rémi Pelhate

I'm having the same issue, but with a DigitalOcean LB. Although I haven't solved it yet, I did find the cause. When provisioning a server with Forge, the Nginx config for the site automatically sets the server_name to match the domain (or the server's public IP when using the "default" site).

DigitalOcean's LB, on the other hand, connects to the server using its private IP on the VPC network. Since that private IP does not match the server_name in your site's Nginx config, the request falls back to the catch-all Nginx config. If you look at the contents of /etc/nginx/sites-enabled/000-catch-all on your server, you'll see something like this:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    server_name _;
    server_tokens off;

    ssl_certificate /etc/nginx/ssl/catch-all.invalid.crt;
    ssl_certificate_key /etc/nginx/ssl/catch-all.invalid.key;

    # Some SSL config...

    return 444;
}

That's where the 444 error comes from. And I'm guessing this might also be the issue on ELB.
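
To check that guess, one quick test (just a sketch; 10.0.1.23 stands in for your server's private IP) is to request the health-check path from the server itself while sending the Host header the LB would send:

curl -i http://localhost/health-check -H "Host: 10.0.1.23"
# If the catch-all vhost answers, curl reports an empty reply,
# since nginx's 444 closes the connection without sending a response.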

We could add that health-check location block to the catch-all config, but it doesn't feel right: you would have to replicate it on every server, and it would only fix the health check. The LB would still forward incoming traffic to the server via its private IP, so regular requests would still fail.
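
For illustration only, that workaround in /etc/nginx/sites-enabled/000-catch-all would look roughly like this (a sketch; note that a server-level return 444 runs before location matching, so it has to move into a location / block for the health-check location to be reachable):

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    # ... SSL listen/certificate lines as before ...

    # Answer the LB health check
    location = /health-check {
        access_log off;
        return 200;
    }

    # Keep rejecting everything else
    location / {
        return 444;
    }
}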

So we need to make sure that traffic from the LB matches the site's Nginx config. I have tried setting the server's private IP as the server_name, but it does not work. Any ideas?

weotch

What worked for me on DigitalOcean was to edit my Nginx config to log the HTTP Host header of the requests that the LB was sending to my server. For instance, with:

log_format hosts_log 'Host: $host';
access_log /var/log/nginx/hosts.log hosts_log;

I just pasted this into the top of the "Edit Nginx Configuration" window that opens from the "Edit Files" menu within the default site.
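
In other words, those two lines end up at the top of the site's config file, outside the server {} block; since that file is included inside Nginx's http context, log_format and access_log are valid there. A rough sketch of the placement:

# Top of the site's Nginx config (http context)
log_format hosts_log 'Host: $host';
access_log /var/log/nginx/hosts.log hosts_log;

server {
    # ... existing site config ...
}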

Then, using the hostnames I saw in that log, I edited Site Settings > Domain > Aliases in Forge and added a comma-delimited list containing the LB's IP address, the server's internal IP, and the hostname used by the LB's health check service:

Forge domain settings

I think this makes the default Nginx vhost accept requests for all of those hostnames. The same approach should work with AWS: log the hostnames that are hitting the server and then add them as aliases of the site.
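
If it helps to visualize, after Forge applies those aliases the server_name line in the site's config ends up looking something like this (all three added values below are placeholders; use whatever actually shows up in hosts.log):

# Hypothetical values: LB IP, server's internal IP, health-check hostname
server_name mywebsite.com 10.0.0.2 10.0.1.23 health-check-host.example;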