Deploying Discourse on Windows

The Idea

A few weeks ago I had the idea of building a forum. I found Discourse, a good-looking, highly extensible, and full-featured forum system, and wanted to deploy it on a server running Windows Server 2022.

Obviously, that forum became this one: "EchoNet".

Deployment

I bought a domain, pointed it at Cloudflare, and then started reading Discourse's official installation guide.

Mail Service

First, Discourse needs a mail service. The Discourse team recommends several "transactional email platforms", but I'm not fond of services that cap the number of free emails per month. So I looked into self-hosting instead and settled on Stalwart. Apart from lacking a webmail interface (just bring your own mail client, such as Thunderbird), it is very complete, and being written in Rust, it performs well.

I configured the mail service following Stalwart's documentation; DNS records and the like need no elaboration here, since the docs cover them in detail. I also used NSSM, a Windows service wrapper, to run Stalwart as a Windows service.
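For reference, wrapping a program with NSSM looks roughly like this; the install path, binary name, and service name below are illustrative assumptions, not Stalwart's actual defaults:

```bat
:: Run from an elevated prompt; the C:\stalwart paths are assumptions
nssm install Stalwart "C:\stalwart\stalwart.exe"
nssm set Stalwart AppDirectory "C:\stalwart"
nssm start Stalwart
```

After this, the service also shows up in services.msc and starts with Windows.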

Deploying Discourse

From here on I stopped following the official guide, because it only supports Docker deployment, and I had no way to run Docker via WSL on this server. Also, since I already run other services on Windows with Nginx and PostgreSQL, I didn't want a second PostgreSQL instance.

可能描述不清楚,我简单说下目前的服务器环境:

  • Database: PostgreSQL, running natively on Windows.
  • Reverse proxy: Nginx, running both natively on Windows and in WSL; the WSL instance serves static assets.
  • Cache: Redis, running in WSL.
  • Forum: Discourse, running in WSL.

Installing WSL

Now, just install Ubuntu:

wsl --install =WSL distro=

Wait for the installation to finish, set up a user, and log in. As usual, update all packages:

sudo apt update && sudo apt upgrade -y

Configuring the Network

By default, WSL networking runs in NAT mode, and the address changes on every restart. Some configuration is needed so that Windows can reach WSL. The NAT setup is a bit fiddly; on Windows 11 or Windows Server 2025 you can instead enable "mirrored" networking mode and simply reach WSL via localhost.
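If you are on one of those newer systems, mirrored mode is a documented WSL setting: add the following to %UserProfile%\.wslconfig and run wsl --shutdown to apply it:

```
[wsl2]
networkingMode=mirrored
```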

Create a .bat file with the following contents:

wsl -d =WSL distro= -u root ip addr del $(ip addr show eth0 ^| grep 'inet\b' ^| awk '{print $2}' ^| head -n 1) dev eth0
wsl -d =WSL distro= -u root ip addr add =WSL LAN IP=/24 broadcast =broadcast address= dev eth0
wsl -d =WSL distro= -u root ip route add 0.0.0.0/0 via =Windows LAN IP= dev eth0
wsl -d =WSL distro= -u root echo nameserver =Windows LAN IP= ^> /etc/resolv.conf
powershell -c "Get-NetAdapter 'vEthernet (WSL)' | Get-NetIPAddress | Remove-NetIPAddress -Confirm:$False; New-NetIPAddress -IPAddress =Windows LAN IP= -PrefixLength 24 -InterfaceAlias 'vEthernet (WSL)'; Get-NetNat | ? Name -Eq WSLNat | Remove-NetNat -Confirm:$False; New-NetNat -Name WSLNat -InternalIPInterfaceAddressPrefix =gateway IP=/24;"
pause

The script above removes the IP address originally assigned to WSL, then sets up a virtual NAT gateway and reconfigures the internal network for both WSL and Windows. It must be re-run after every Windows restart.
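To illustrate what the first line of the script extracts, here is the same grep/awk pipeline run against a sample line of ip addr output (the address is made up):

```shell
# Sample line from `ip addr show eth0`; the CIDR address is field 2
sample="    inet 172.27.112.5/20 brd 172.27.127.255 scope global eth0"
addr=$(echo "$sample" | grep 'inet\b' | awk '{print $2}' | head -n 1)
echo "$addr"   # prints the address/prefix that the script then deletes
```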

Installing the Required Dependencies

Discourse depends on the following software:

  • Git
  • PostgreSQL
  • Redis
  • Ruby (ruby-build)
  • Node.js (npm, pnpm)
  • ImageMagick

First, install Git:

sudo apt install git

Next, Redis. The version in the APT repository is old, so install the latest Redis from the official repository:

sudo apt install lsb-release curl gpg
curl -fsSL https://packages.redis.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg
sudo chmod 644 /usr/share/keyrings/redis-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/redis.list
sudo apt update
sudo apt install redis -y

If you want to run PostgreSQL inside WSL, you can likewise install it from the official repository:

sudo apt install ca-certificates
sudo install -d /usr/share/postgresql-common/pgdg
sudo curl -o /usr/share/postgresql-common/pgdg/apt.postgresql.org.asc --fail https://www.postgresql.org/media/keys/ACCC4CF8.asc
. /etc/os-release
sudo sh -c "echo 'deb [signed-by=/usr/share/postgresql-common/pgdg/apt.postgresql.org.asc] https://apt.postgresql.org/pub/repos/apt $VERSION_CODENAME-pgdg main' > /etc/apt/sources.list.d/pgdg.list"
sudo apt update
sudo apt install postgresql -y

Because WSL's outgoing connections come from the 192.168.0.0/16 range, PostgreSQL's pg_hba.conf (mine is at C:\Program Files\PostgreSQL\18\data\pg_hba.conf) must be configured to allow access from WSL. This isn't needed if you use "mirrored" networking mode or run PostgreSQL inside WSL. Add the following line at the bottom of the file:

host all discourse 192.168.0.0/16 scram-sha-256

This allows logins from the 192.168.0.0/16 range as the user discourse, with access to all databases, authenticated via scram-sha-256. Remember to create a new discourse user.
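Creating that user can be done from psql on the Windows host; a minimal sketch (the placeholder password mirrors the one used in discourse.conf later, and CREATEDB is granted so the later rake db:create step can create the database):

```shell
psql -U postgres -c "CREATE ROLE discourse LOGIN PASSWORD '=database password=' CREATEDB;"
```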

Note: the official Windows PostgreSQL build does not bundle the pgvector extension for vector similarity search, but the Discourse plugin discourse-ai uses it as its vector store, so remember to compile and install it per its instructions. Otherwise, Discourse's database initialization will fail later.

On Linux, pgvector doesn't need to be compiled by hand; prebuilt packages are available (for example from the PGDG repository).

Then install Nginx, also from its official repository:

sudo apt install gnupg2 ubuntu-keyring
curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor | sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] http://nginx.org/packages/ubuntu `lsb_release -cs` nginx" | sudo tee /etc/apt/sources.list.d/nginx.list
echo -e "Package: *\nPin: origin nginx.org\nPin: release o=nginx\nPin-Priority: 900\n" | sudo tee /etc/apt/preferences.d/99nginx
sudo apt update
sudo apt install nginx -y

Use NVM (a Node.js version manager) to install the latest Node.js (currently 24.9.0):

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.3/install.sh | bash
\. "$HOME/.nvm/nvm.sh"
nvm install 24
corepack enable pnpm
pnpm -v

Then install rbenv (a Ruby version manager):

git clone https://github.com/rbenv/rbenv.git ~/.rbenv
~/.rbenv/bin/rbenv init
source ~/.bashrc

Compiling Ruby requires ruby-build, installed here as an rbenv plugin:

git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build

Plus the necessary build dependencies:

sudo apt install build-essential libxslt1-dev libcurl4-openssl-dev libksba8 libksba-dev libreadline-dev libssl-dev zlib1g-dev libsnappy-dev libyaml-dev libsqlite3-dev sqlite3 postgresql-server-dev-all postgresql-contrib libpq-dev brotli -y

Now Ruby can be compiled and installed:

# List the stable Ruby versions
rbenv install -l
# Install the chosen version and set it as the global default
rbenv install =Ruby version= -v
rbenv global =Ruby version=
rbenv rehash

Finally, compile and install ImageMagick, which Discourse uses for image processing (though it is a little odd that Discourse doesn't use a Ruby imaging library).

Find the latest version number on the ImageMagick GitHub releases page.

# Install the dependencies
sudo apt install autoconf curl g++ yasm cmake libde265-0 libde265-dev libjpeg-turbo8 libjpeg-turbo8-dev libwebp7 x265 libx265-dev libtool libpng16-16t64 libpng-dev libwebp-dev libgomp1 libaom-dev libwebpmux3 libwebpdemux2 ghostscript libxml2-dev libxml2-utils librsvg2-dev libltdl-dev libbz2-dev gsfonts libtiff-dev libfreetype-dev libjpeg-dev libheif1 libheif-dev -y
# Download and build the source
cd ~/
wget -O ImageMagick.tar.gz "https://github.com/ImageMagick/ImageMagick/archive/refs/tags/=ImageMagick version=.tar.gz"
tar zxf ImageMagick.tar.gz
cd ImageMagick-=ImageMagick version=
./configure --disable-shared --enable-delegate-build --enable-static --enable-bounds-checking --enable-hdri --enable-hugepages --with-threads --with-modules --with-quantum-depth=16 --without-magick-plus-plus --with-bzlib --with-zlib --without-autotrace --with-freetype --with-jpeg --without-lcms --with-lzma --with-png  --with-tiff --with-heic --with-rsvg --with-webp
make all -j"$(nproc)"
sudo make install
sudo ldconfig /usr/local/lib
cd ../
rm ImageMagick.tar.gz
rm -rf ImageMagick-=ImageMagick version=
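A quick way to confirm the build took: ImageMagick 7 installs a magick entry point, which reports the version and the delegates that were compiled in:

```shell
magick -version
```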

Install a few more image-processing utilities:

sudo apt install advancecomp gifsicle jpegoptim libjpeg-progs optipng pngcrush pngquant jhead -y

Find the latest OxiPNG version on its GitHub releases page, then install it:

cd ~
wget https://github.com/oxipng/oxipng/releases/download/v=OxiPNG version=/oxipng-=OxiPNG version=-x86_64-unknown-linux-musl.tar.gz
tar -xzvf oxipng-=OxiPNG version=-x86_64-unknown-linux-musl.tar.gz
sudo cp oxipng-=OxiPNG version=-x86_64-unknown-linux-musl/oxipng /usr/local/bin
rm oxipng-=OxiPNG version=-x86_64-unknown-linux-musl.tar.gz
rm -rf oxipng-=OxiPNG version=-x86_64-unknown-linux-musl

Deploying Discourse

Clone the Discourse repository:

cd ~
git clone https://github.com/discourse/discourse.git
cd discourse

Install the Gems and front-end packages Discourse needs:

gem update --system
gem install rails bundler
bundle install
pnpm install

Then edit Discourse's configuration file, config/discourse.conf:

# Copy the default configuration first
cp config/discourse_defaults.conf config/discourse.conf

Open config/discourse.conf and change the following settings:

db_host = "=database host IP="
db_port = =database port=
db_name = discourse
db_user = discourse
db_password = "=database password="

redis_host = "=Redis host IP="
redis_port = =Redis port=
redis_db = 0
redis_username =
redis_password =

hostname = "=site domain="
enable_cors = true

smtp_address = "=SMTP host IP="
smtp_port = =SMTP port=
smtp_domain = "=SMTP domain="
smtp_user_name = "=SMTP username="
notification_email = "=notification email="
smtp_password = "=SMTP password="
# Enable SMTP STARTTLS.
smtp_enable_start_tls = true
# Enable SMTPS; mutually exclusive with the option above.
smtp_force_tls = false

developer_emails = "=initial admin email="

# YJIT improves performance at the cost of higher memory usage.
yjit_enabled = false

load_mini_profiler = false

# Disable request rate limiting
max_reqs_per_ip_mode = none
max_reqs_rate_limit_on_private = false

Please leave YJIT off for now: running Discourse on Ruby 3.4.6, Puma would throw undefined method 'wait_readable' for nil some time after Sidekiq started. I found a related issue in the Puma repository; it is essentially a bug in Ruby itself.
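If you later want to retry YJIT, you can check from Ruby whether the interpreter was built with it and whether it is active; this is a generic check, not Discourse-specific:

```ruby
# RubyVM::YJIT exists only on YJIT-capable builds
if defined?(RubyVM::YJIT)
  puts "YJIT enabled: #{RubyVM::YJIT.enabled?}"
else
  puts "this Ruby build has no YJIT support"
end
```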

Set an environment variable so Rails runs in production mode:

export RAILS_ENV=production

Run the following command to initialize the database (this is why the database had to be configured first):

bundle exec rake db:create

I chose to manage Discourse with systemd, so create /etc/systemd/system/discourse.service:

[Unit]
Description=Discourse
After=network.target redis.service
Requires=redis.service

[Service]
Type=simple
User==username=
WorkingDirectory=/home/=username=/discourse
ExecStart=/home/=username=/.rbenv/shims/bundle exec puma -C /home/=username=/discourse/config/puma.rb
ExecReload=/home/=username=/.rbenv/shims/bundle exec pumactl restart
Restart=on-failure
RestartSec=5s
Environment=RAILS_ENV=production
Environment=HOME=/home/=username=
Environment=PUMA_LISTENER==Puma listen address and port=
Environment=PUMA_THREADS==Puma thread count=
Environment=PUMA_WORKERS==Puma worker count=
Environment=PUMA_SIDEKIQS==Sidekiq process count=
Environment=UNICORN_SIDEKIQ_MAX_RSS==Sidekiq memory limit=
Environment=DISCOURSE_ENABLE_EMAIL_SYNC_DEMON==enable Discourse email sync=
ProtectSystem=full
ProtectSystem=full

[Install]
WantedBy=multi-user.target

Enable it, but don't start it yet:

sudo systemctl daemon-reload
sudo systemctl enable discourse
sudo systemctl stop discourse

Create a .sh file for updating Discourse, now and in the future:

#!/bin/bash

export RAILS_ENV=production

cd ~/discourse

git stash
git pull
git checkout tests-passed

# Update plugins
cd plugins/
for p in *
do
    if [ -d "$p/.git" ]
    then
        echo $p
        cd ${p}/
        git pull
        cd ../
    fi
done
cd ../

bundle install
pnpm install

# Download the IP geolocation databases from a third-party mirror
curl -L "https://github.com/P3TERX/GeoLite.mmdb/raw/download/GeoLite2-City.mmdb" -o vendor/data/GeoLite2-City.mmdb
curl -L "https://github.com/P3TERX/GeoLite.mmdb/raw/download/GeoLite2-ASN.mmdb" -o vendor/data/GeoLite2-ASN.mmdb

bundle exec rake db:migrate
bundle exec rake themes:update
bundle exec rake assets:precompile

git stash pop

# Uncomment when using S3 object storage
# bundle exec rake s3:upload_assets
# bundle exec rake s3:expire_missing_assets

sudo systemctl restart discourse

Puma serves as the local web server here. For whatever reason, Discourse doesn't ship an up-to-date configuration file for Puma (it originally used Unicorn), so I had to write my own based on unicorn.conf.rb. Put the following into config/puma.rb:

# frozen_string_literal: true

# Key environment variables:
# RAILS_ENV=production: Rails environment, production or development.
# PUMA_WORKERS=4: number of worker processes.
# PUMA_THREADS=8:16: minimum and maximum thread counts.
# PUMA_TIMEOUT=60: worker timeout in seconds.
# PUMA_SIDEKIQS=0: number of Sidekiq processes.
# PUMA_LISTENER: listen endpoint; takes precedence over PUMA_BIND_ALL and PUMA_PORT.
# PUMA_BIND_ALL=0: listen on all addresses.
# PUMA_PORT=3000: listen port.
# DISCOURSE_ENABLE_EMAIL_SYNC_DEMON=false: enable the Discourse email sync demon.

app_root = File.expand_path(File.expand_path(File.dirname(__FILE__)) + "/../")
ENV["RAILS_ROOT"] ||= app_root

directory app_root

enable_logstash_logger = ENV["ENABLE_LOGSTASH_LOGGER"] == "1"
puma_stderr_path = "#{app_root}/log/puma.stderr.log"
puma_stdout_path = "#{app_root}/log/puma.stdout.log"
sidekiq_log_path = "#{app_root}/log/sidekiq.log"
email_sync_log_path = "#{app_root}/log/email_sync.log"

FileUtils.mkdir_p("#{app_root}/tmp/pids") unless File.exist?("#{app_root}/tmp/pids")
FileUtils.touch(puma_stderr_path) unless File.exist?(puma_stderr_path)
FileUtils.touch(puma_stdout_path) unless File.exist?(puma_stdout_path)
FileUtils.touch(sidekiq_log_path) unless File.exist?(sidekiq_log_path)
FileUtils.touch(email_sync_log_path) unless File.exist?(email_sync_log_path)

if enable_logstash_logger
    require_relative "../lib/discourse_logstash_logger"
    FileUtils.touch(puma_stderr_path) unless File.exist?(puma_stderr_path)
else
    stdout_redirect puma_stdout_path, puma_stderr_path, true
end

environment ENV.fetch("RAILS_ENV") { "production" }

workers (ENV["PUMA_WORKERS"] || 4).to_i

threads_count = ENV.fetch("PUMA_THREADS") { "8:16" }
threads threads_count.split(":").first.to_i, threads_count.split(":").last.to_i

preload_app!

bind_address = ENV["PUMA_LISTENER"] || "#{(ENV["PUMA_BIND_ALL"] ? "0.0.0.0:" : "127.0.0.1:")}#{(ENV["PUMA_PORT"] || 3000).to_i}"

if bind_address.start_with?("/") || bind_address.start_with?("unix:")
    bind "unix://#{bind_address.gsub(%r{^unix://}, '')}"
elsif bind_address =~ /^\d+\.\d+\.\d+\.\d+:\d+$/ || bind_address =~ /^[\w\.\-]+:\d+$/
    bind "tcp://#{bind_address}"
elsif bind_address =~ /^\d+$/
    bind "tcp://127.0.0.1:#{bind_address}"
else
    bind "tcp://#{bind_address}"
end

pidfile ENV["PUMA_PID_PATH"] || "#{app_root}/tmp/pids/puma.pid"

state_path ENV["PUMA_STATE_PATH"] || "#{app_root}/tmp/pids/puma.state"

if ENV["RAILS_ENV"] == "production"
    worker_timeout (ENV["PUMA_TIMEOUT"] || 30).to_i
else
    worker_timeout (ENV["PUMA_TIMEOUT"] || 60).to_i
end

before_fork do
    if defined?(ActiveRecord)
        ActiveRecord::Base.connection_pool.disconnect! rescue nil
    end

    Discourse.preload_rails!
    Discourse.before_fork

    initialized = @puma_demons_initialized ||= false
    unless initialized
        supervisor = ENV["PUMA_SUPERVISOR_PID"].to_i
        if supervisor > 0
            Thread.new do
                loop do
                    unless File.exist?("/proc/#{supervisor}")
                        warn "Kill self: supervisor (#{supervisor}) is gone"
                        Process.kill("TERM", Process.pid) rescue nil
                    end
                    sleep 2
                end
            end
        end

        sidekiqs = ENV["PUMA_SIDEKIQS"].to_i
        if sidekiqs > 0
            begin
                warn "starting #{sidekiqs} supervised sidekiqs"
                require "demon/sidekiq"
                require "logger"

                sidekiq_logger = Logger.new(sidekiq_log_path)
                sidekiq_logger.level = Logger::INFO
                sidekiq_logger.sync = true if sidekiq_logger.respond_to?(:sync=)

                Demon::Sidekiq.after_fork { DiscourseEvent.trigger(:sidekiq_fork_started) }
                Demon::Sidekiq.start(sidekiqs, logger: sidekiq_logger) if defined?(Demon::Sidekiq)

                if Discourse.enable_sidekiq_logging?
                    Signal.trap("USR1") do
                        sleep 1
                        Demon::Sidekiq.kill("USR2")
                    end
                end
            rescue LoadError => e
                warn "Cannot require Demon::Sidekiq: #{e}"
            end
        end

        enable_email_sync_demon = ENV["DISCOURSE_ENABLE_EMAIL_SYNC_DEMON"] == "true"
        if enable_email_sync_demon
            begin
                warn "starting up EmailSync demon"
                require "demon/email_sync" if File.exist?(File.join(app_root, "lib", "demon", "email_sync.rb"))
                require "logger"

                email_sync_logger = Logger.new(email_sync_log_path)
                email_sync_logger.level = Logger::INFO
                email_sync_logger.sync = true if email_sync_logger.respond_to?(:sync=)

                Demon::EmailSync.start(1, logger: email_sync_logger) if defined?(Demon::EmailSync)
            rescue => e
                warn "EmailSync demon start failed: #{e}"
            end
        end

        if defined?(DiscoursePluginRegistry)
            DiscoursePluginRegistry.demon_processes.each do |demon_class|
                warn "starting #{demon_class.prefix} demon"
                demon_class.start(1, logger: STDOUT) rescue nil
            end
        end

        Thread.new do
            loop do
                begin
                    sleep 60
                    if defined?(Demon) && defined?(Demon::Sidekiq)
                        Demon::Sidekiq.ensure_running if Demon::Sidekiq.respond_to?(:ensure_running)
                        Demon::Sidekiq.heartbeat_check if Demon::Sidekiq.respond_to?(:heartbeat_check)
                        Demon::Sidekiq.rss_memory_check if Demon::Sidekiq.respond_to?(:rss_memory_check)
                    end
                    if enable_email_sync_demon && defined?(Demon::EmailSync)
                        Demon::EmailSync.ensure_running if Demon::EmailSync.respond_to?(:ensure_running)
                        Demon::EmailSync.check_email_sync_heartbeat if Demon::EmailSync.respond_to?(:check_email_sync_heartbeat)
                    end
                    if defined?(DiscoursePluginRegistry)
                        DiscoursePluginRegistry.demon_processes.each { |demon_class| demon_class.ensure_running rescue nil }
                    end
                rescue => e
                    warn "Error in demon processes heartbeat check: #{e}\n#{e.backtrace.join("\n")}"
                end
            end
        end

        if defined?(Redis) && defined?(Discourse) && Discourse.respond_to?(:redis)
            begin
                Discourse.redis.close
            rescue => e
                warn "Failed to close Discourse.redis in master: #{e}"
            end
        end

        @puma_demons_initialized = true
    end
end

on_worker_boot do
    if defined?(ActiveRecord)
        ActiveRecord::Base.establish_connection rescue nil
    end
    if defined?(Discourse) && Discourse.respond_to?(:after_fork)
        Discourse.after_fork
    end
    SignalTrapLogger.instance.after_fork
    Signal.trap("USR2") do
        message = <<~MSG
            Puma worker received USR2 signal indicating it is about to timeout, dumping backtrace for main thread
            #{Thread.current.backtrace&.join("\n")}
        MSG
        if defined?(SignalTrapLogger) && SignalTrapLogger.respond_to?(:instance)
            SignalTrapLogger.instance.log(STDOUT, message, level: :warn) rescue nil
        else
            warn message
        end
    end
    DiscourseEvent.trigger(:web_fork_started) if defined?(DiscourseEvent) && DiscourseEvent.respond_to?(:trigger)
    Discourse.after_unicorn_worker_fork
end

Adjust the environment variables in the .service file above as needed.

Incidentally, when Puma runs with this configuration, five log files appear under the log directory:

  • production.log: the Discourse log.
  • puma.stderr.log: Puma's standard error log.
  • puma.stdout.log: Puma's standard output log.
  • sidekiq.log: the Sidekiq log.
  • email_sync.log: the email sync log.

Nginx also needs to serve the static assets, so write a simple Nginx configuration (the default file is /etc/nginx/nginx.conf):

user =username=;
worker_processes auto;

error_log /var/log/nginx/error.log notice;
pid /run/nginx.pid;

events {
    worker_connections 1024;
}


http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    sendfile on;
    tcp_nopush on;

    keepalive_timeout 65;

    gzip off;

    include /etc/nginx/conf.d/*.conf;

    server {
        listen =Nginx listen port=;

        root /home/=username=/discourse/public;

        location / {
            try_files $uri $uri/ =404;
        }
    }
}

Now run update.sh for the first time, and hope for the best.

Check the status with sudo systemctl status discourse: if it shows Active: active (running) and keeps running normally (no automatic restarts), it's working.
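Discourse also exposes a plain-text health endpoint at /srv/status; hitting Puma directly with it is a quick sanity check (assuming Puma listens on 127.0.0.1:3000):

```shell
curl -s http://127.0.0.1:3000/srv/status
```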

While waiting, you can configure the Windows-side Nginx:

worker_processes auto;

error_log logs/error.log error;

pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    types {
        text/csv csv;
        font/ttf ttf;
        font/otf otf;
    }

    access_log logs/access.log;
    log_not_found off;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;

    client_header_buffer_size 8k;
    client_max_body_size 0;

    gzip on;
    gzip_min_length 10k;
    gzip_comp_level 5;
    gzip_types application/json text/css text/javascript application/x-javascript application/javascript image/svg+xml application/wasm font/ttf font/otf;
    gzip_vary on;
    gzip_proxied any;

    server_names_hash_bucket_size 64;

    set_real_ip_from 127.0.0.1;
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;

    server_tokens off;

    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1.3 TLSv1.2;
    ssl_early_data on;
    ssl_session_tickets on;
    ssl_session_timeout 1h;
    ssl_session_cache shared:SSL:16m;

    map $http_x_forwarded_proto $client_scheme {
        default $scheme;
        "~https$" https;
    }

    upstream discourse {
        server 127.0.0.1:=Puma listen port=;
    }
    upstream discourse_public {
        server 127.0.0.1:=Nginx listen port=;
    }

    proxy_cache_path =cache directory= inactive=1440m levels=1:2 keys_zone=one:10m max_size=600m;

    server {
        listen 443 ssl ipv6only=off;
        server_name =site domain=;
        http2 on;

        ssl_certificate =TLS certificate path=;
        ssl_certificate_key =TLS certificate key path=;

        large_client_header_buffers 4 32k;

        # Let Discourse enforce the client upload size limit.
        client_max_body_size 0;

        etag off;

        location ^~ /backups/ {
            internal;
        }

        location /favicon.ico {
            return 204;
            access_log off;
        }

        location / {
            add_header ETag "";

            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $client_scheme;
            proxy_set_header X-Request-Start "t=${msec}";
            proxy_set_header Client-Ip "";
            proxy_set_header Host $http_host;

            proxy_http_version 1.1;

            proxy_buffer_size 32k;
            proxy_buffers 4 32k;

            location ~ ^/uploads/short-url/ {
                proxy_pass http://discourse;

                break;
            }

            location ~ ^/(secure-media-uploads/|secure-uploads)/ {
                proxy_pass http://discourse;

                break;
            }

            location ~ ^/(fonts|assets|plugins|uploads)/.*\.(eot|ttf|woff|woff2|ico|otf)$ {
                expires 1y;
                add_header Cache-Control public,immutable;
                add_header Access-Control-Allow-Origin *;

                proxy_pass http://discourse_public;

                break;
            }

            location = /srv/status {
                access_log off;
                
                proxy_pass http://discourse;

                break;
            }

            location ~ ^/javascripts/ {
                expires 1d;
                add_header Cache-Control public,immutable;
                add_header Access-Control-Allow-Origin *;

                proxy_pass http://discourse_public;

                break;
            }

            location ~ ^/assets/ {
                gzip_static on;

                expires 1y;
                add_header Cache-Control public,immutable;

                proxy_pass http://discourse_public;

                break;
            }

            location ~ ^/plugins/ {
                expires 1y;
                add_header Cache-Control public,immutable;
                add_header Access-Control-Allow-Origin *;

                proxy_pass http://discourse_public;

                break;
            }

            location ~ /images/emoji/ {
                expires 1y;
                add_header Cache-Control public,immutable;
                add_header Access-Control-Allow-Origin *;

                proxy_pass http://discourse_public;

                break;
            }

            location ~ ^/uploads/ {
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Request-Start "t=${msec}";
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $client_scheme;
                proxy_set_header Client-Ip "";

                expires 1y;
                add_header Cache-Control public,immutable;

                location ~ /stylesheet-cache/ {
                    add_header Access-Control-Allow-Origin *;

                    # try_files $uri =404;
                    proxy_pass http://discourse_public;

                    break;
                }

                location ~* \.(gif|png|jpg|jpeg|bmp|tif|tiff|ico|webp|avif)$ {
                    add_header Access-Control-Allow-Origin *;

                    # try_files $uri =404;
                    proxy_pass http://discourse_public;

                    break;
                }

                location ~* \.(svg)$ {
                }

                location ~ /_?optimized/ {
                    add_header Access-Control-Allow-Origin *;

                    # try_files $uri =404;
                    proxy_pass http://discourse_public;

                    break;
                }

                proxy_pass http://discourse;

                break;
            }

            location ~ ^/admin/backups/ {
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Request-Start "t=${msec}";
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $client_scheme;
                proxy_set_header Client-Ip "";

                proxy_pass http://discourse;

                break;
            }

            location ~ ^/(svg-sprite/|letter_avatar/|letter_avatar_proxy/|user_avatar|highlight-js|stylesheets|theme-javascripts|favicon/proxied|service-worker|extra-locales/) {
                proxy_ignore_headers "Set-Cookie";
                proxy_hide_header "Set-Cookie";
                proxy_hide_header "X-Discourse-Username";
                proxy_hide_header "X-Runtime";

                proxy_cache one;
                proxy_cache_key "$scheme,$host,$request_uri";
                proxy_cache_valid 200 301 302 7d;

                proxy_pass http://discourse;

                break;
            }

            location /message-bus/ {
                proxy_http_version 1.1;

                proxy_buffering off;

                proxy_pass http://discourse;

                break;
            }

            try_files $uri @discourse_public;
        }

        location /downloads/ {
            internal;
            
            proxy_pass http://discourse_public;
        }

        location @discourse {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Request-Start "t=${msec}";
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $client_scheme;
            proxy_set_header X-Sendfile-Type "";
            proxy_set_header X-Accel-Mapping "";
            proxy_set_header Client-Ip "";

            proxy_pass http://discourse;
        }

        location @discourse_public {
            proxy_intercept_errors on;
            error_page 404 403 = @discourse;

            proxy_pass http://discourse_public;
        }
    }
}

Visit the site; if it loads, the deployment is done and you can start configuring Discourse in earnest.