lua-resty-limit-traffic
A Lua library for limiting and controlling traffic in OpenResty/ngx_lua
$ opm get openresty/lua-resty-limit-traffic
Name
lua-resty-limit-traffic - a Lua library for limiting and controlling traffic in OpenResty/ngx_lua
Status
This library is already usable, though still highly experimental.
The Lua API is still evolving and may change in the near future without notice.
Synopsis
    # demonstrate the usage of the resty.limit.req module (alone!)
    http {
        lua_shared_dict my_limit_req_store 100m;

        server {
            location / {
                access_by_lua_block {
                    -- well, we could put the require() and new() calls in our own Lua
                    -- modules to save overhead. here we put them below just for
                    -- convenience.
                    local limit_req = require "resty.limit.req"

                    -- limit the requests under 200 req/sec with a burst of 100 req/sec,
                    -- that is, we delay requests above 200 req/sec and under 300
                    -- req/sec, and reject any requests exceeding 300 req/sec.
                    local lim, err = limit_req.new("my_limit_req_store", 200, 100)
                    if not lim then
                        ngx.log(ngx.ERR,
                                "failed to instantiate a resty.limit.req object: ", err)
                        return ngx.exit(500)
                    end

                    -- the following call must be per-request.
                    -- here we use the remote (IP) address as the limiting key
                    local key = ngx.var.binary_remote_addr
                    local delay, err = lim:incoming(key, true)
                    if not delay then
                        if err == "rejected" then
                            return ngx.exit(503)
                        end
                        ngx.log(ngx.ERR, "failed to limit req: ", err)
                        return ngx.exit(500)
                    end

                    if delay >= 0.001 then
                        -- the 2nd return value holds the number of excess requests
                        -- per second for the specified key. for example, the number 31
                        -- means the current request rate is at 231 req/sec for the
                        -- specified key.
                        local excess = err

                        -- the request rate exceeds 200 req/sec but is below 300
                        -- req/sec, so we intentionally delay the request here a bit
                        -- to conform to the 200 req/sec rate.
                        ngx.sleep(delay)
                    end
                }

                # content handler goes here. if it is content_by_lua, then you can
                # merge the Lua code above in access_by_lua into your content_by_lua's
                # Lua handler to save a little bit of CPU time.
            }
        }
    }
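The delay returned by lim:incoming() above is derived from the excess rate: roughly the excess (in req/sec) divided by the configured rate. The exact bookkeeping lives inside resty.limit.req, so treat the following arithmetic only as a sketch of the relationship, using the 200 req/sec rate configured above:

```lua
-- approximate relationship between the excess rate and the returned delay
-- (rate and excess values here mirror the example comments above)
local rate = 200                -- req/sec, as passed to limit_req.new()
local excess = 31               -- i.e. the current rate is about 231 req/sec
local delay = excess / rate     -- about 0.155 sec of added latency
print(delay)
```

This is why heavier excess traffic is delayed longer: each delayed request is held back just enough to drain the "leaky bucket" at the configured rate.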
    # demonstrate the usage of the resty.limit.conn module (alone!)
    http {
        lua_shared_dict my_limit_conn_store 100m;

        server {
            location / {
                access_by_lua_block {
                    -- well, we could put the require() and new() calls in our own Lua
                    -- modules to save overhead. here we put them below just for
                    -- convenience.
                    local limit_conn = require "resty.limit.conn"

                    -- limit the requests under 200 concurrent requests (normally just
                    -- incoming connections unless protocols like SPDY are used) with
                    -- a burst of 100 extra concurrent requests, that is, we delay
                    -- requests above 200 and under 300 concurrent connections, and
                    -- reject any new requests exceeding 300 connections.
                    -- also, we assume a default request time of 0.5 sec, which can be
                    -- dynamically adjusted by the leaving() call in log_by_lua below.
                    local lim, err = limit_conn.new("my_limit_conn_store", 200, 100, 0.5)
                    if not lim then
                        ngx.log(ngx.ERR,
                                "failed to instantiate a resty.limit.conn object: ", err)
                        return ngx.exit(500)
                    end

                    -- the following call must be per-request.
                    -- here we use the remote (IP) address as the limiting key
                    local key = ngx.var.binary_remote_addr
                    local delay, err = lim:incoming(key, true)
                    if not delay then
                        if err == "rejected" then
                            return ngx.exit(503)
                        end
                        ngx.log(ngx.ERR, "failed to limit conn: ", err)
                        return ngx.exit(500)
                    end

                    if lim:is_committed() then
                        local ctx = ngx.ctx
                        ctx.limit_conn = lim
                        ctx.limit_conn_key = key
                        ctx.limit_conn_delay = delay
                    end

                    -- the 2nd return value holds the current concurrency level
                    -- for the specified key.
                    local conn = err

                    if delay >= 0.001 then
                        -- the concurrency level exceeds 200 but is below 300
                        -- connections, so we intentionally delay the request here
                        -- a bit to conform to the 200 connection limit.
                        -- ngx.log(ngx.WARN, "delaying")
                        ngx.sleep(delay)
                    end
                }

                # content handler goes here. if it is content_by_lua, then you can
                # merge the Lua code above in access_by_lua into your
                # content_by_lua's Lua handler to save a little bit of CPU time.

                log_by_lua_block {
                    local ctx = ngx.ctx
                    local lim = ctx.limit_conn
                    if lim then
                        -- if you are using an upstream module in the content phase,
                        -- then you probably want to use $upstream_response_time
                        -- instead of ($request_time - ctx.limit_conn_delay) below.
                        local latency = tonumber(ngx.var.request_time) - ctx.limit_conn_delay
                        local key = ctx.limit_conn_key
                        assert(key)
                        local conn, err = lim:leaving(key, latency)
                        if not conn then
                            ngx.log(ngx.ERR,
                                    "failed to record the connection leaving ",
                                    "request: ", err)
                            return
                        end
                    end
                }
            }
        }
    }
    # demonstrate the usage of the resty.limit.traffic module
    http {
        lua_shared_dict my_req_store 100m;
        lua_shared_dict my_conn_store 100m;

        server {
            location / {
                access_by_lua_block {
                    local limit_conn = require "resty.limit.conn"
                    local limit_req = require "resty.limit.req"
                    local limit_traffic = require "resty.limit.traffic"

                    local lim1, err = limit_req.new("my_req_store", 300, 200)
                    assert(lim1, err)
                    local lim2, err = limit_req.new("my_req_store", 200, 100)
                    assert(lim2, err)
                    local lim3, err = limit_conn.new("my_conn_store", 1000, 1000, 0.5)
                    assert(lim3, err)

                    local limiters = {lim1, lim2, lim3}

                    local host = ngx.var.host
                    local client = ngx.var.binary_remote_addr
                    local keys = {host, client, client}

                    local states = {}

                    local delay, err = limit_traffic.combine(limiters, keys, states)
                    if not delay then
                        if err == "rejected" then
                            return ngx.exit(503)
                        end
                        ngx.log(ngx.ERR, "failed to limit traffic: ", err)
                        return ngx.exit(500)
                    end

                    if lim3:is_committed() then
                        local ctx = ngx.ctx
                        ctx.limit_conn = lim3
                        ctx.limit_conn_key = keys[3]
                    end

                    print("sleeping ", delay, " sec, states: ",
                          table.concat(states, ", "))

                    if delay >= 0.001 then
                        ngx.sleep(delay)
                    end
                }

                # content handler goes here. if it is content_by_lua, then you can
                # merge the Lua code above in access_by_lua into your
                # content_by_lua's Lua handler to save a little bit of CPU time.

                log_by_lua_block {
                    local ctx = ngx.ctx
                    local lim = ctx.limit_conn
                    if lim then
                        -- if you are using an upstream module in the content phase,
                        -- then you probably want to use $upstream_response_time
                        -- instead of $request_time below.
                        local latency = tonumber(ngx.var.request_time)
                        local key = ctx.limit_conn_key
                        assert(key)
                        local conn, err = lim:leaving(key, latency)
                        if not conn then
                            ngx.log(ngx.ERR,
                                    "failed to record the connection leaving ",
                                    "request: ", err)
                            return
                        end
                    end
                }
            }
        }
    }
Description
This library provides several Lua modules to help OpenResty/ngx_lua users control and limit traffic, whether by request rate, by request concurrency, or both.
resty.limit.req provides request rate limiting and throttling based on the "leaky bucket" method.
resty.limit.count provides rate limiting based on a "fixed window" implementation, available since OpenResty 1.13.6.1+.
resty.limit.conn provides limiting and throttling of the request concurrency level via extra delays.
resty.limit.traffic provides an aggregator that combines multiple instances of the resty.limit.req, resty.limit.count, and/or resty.limit.conn classes.
Please check out the documentation of each of these Lua modules for more details.
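The synopsis above covers resty.limit.req, resty.limit.conn, and resty.limit.traffic, but not resty.limit.count. A minimal sketch of its fixed-window usage might look like the following (the shared dict name and the 5000-requests-per-hour quota are made-up values for illustration):

```nginx
http {
    lua_shared_dict my_limit_count_store 100m;

    server {
        location / {
            access_by_lua_block {
                local limit_count = require "resty.limit.count"

                -- allow at most 5000 requests per 3600-second window
                local lim, err = limit_count.new("my_limit_count_store", 5000, 3600)
                if not lim then
                    ngx.log(ngx.ERR,
                            "failed to instantiate a resty.limit.count object: ", err)
                    return ngx.exit(500)
                end

                -- as in the synopsis, key on the client IP address
                local key = ngx.var.binary_remote_addr
                local delay, err = lim:incoming(key, true)
                if not delay then
                    if err == "rejected" then
                        -- quota exhausted for the current window
                        return ngx.exit(503)
                    end
                    ngx.log(ngx.ERR, "failed to limit count: ", err)
                    return ngx.exit(500)
                end
            }
        }
    }
}
```

Unlike the leaky-bucket limiter, a fixed-window counter never delays requests; it only accepts or rejects them within each window.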
This library provides a more flexible alternative to NGINX's standard ngx_limit_req and ngx_limit_conn modules. For example, the Lua-based limiters provided by this library can be used in almost any context, such as before the downstream SSL handshake completes (as with ssl_certificate_by_lua) or before issuing backend requests.
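As a rough illustration of the SSL handshake case, a rate limiter could be applied inside ssl_certificate_by_lua_block. This is only a sketch: the shared dict name and the 100-handshakes-per-second rate are invented here, and since no HTTP response exists yet at this phase, over-limit clients are handled by aborting the handshake with ngx.exit(ngx.ERROR) rather than returning a status code:

```nginx
server {
    listen 443 ssl;

    ssl_certificate_by_lua_block {
        local limit_req = require "resty.limit.req"

        -- hypothetical store and rates: 100 handshakes/sec with a burst of 50
        local lim, err = limit_req.new("my_ssl_req_store", 100, 50)
        if not lim then
            ngx.log(ngx.ERR, "failed to instantiate a limiter: ", err)
            return ngx.exit(ngx.ERROR)
        end

        -- assuming the client address variable is readable in this phase
        local delay, err = lim:incoming(ngx.var.binary_remote_addr, true)
        if not delay then
            -- abort the handshake on both "rejected" and real errors
            return ngx.exit(ngx.ERROR)
        end

        if delay >= 0.001 then
            ngx.sleep(delay)  -- non-blocking sleep is available here
        end
    }
}
```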
Installation
This library is enabled by default in OpenResty 1.11.2.2+.
If you have to install this library manually, make sure you are using at least OpenResty 1.11.2.1 or a custom nginx build including ngx_lua 0.10.6+. You also need to configure the lua_package_path directive to add the path of your lua-resty-limit-traffic source tree to ngx_lua's Lua module search path, as in:
    # nginx.conf
    http {
        lua_package_path "/path/to/lua-resty-limit-traffic/lib/?.lua;;";
        ...
    }
Then load one of the modules provided by this library in Lua. For example:
local limit_req = require "resty.limit.req"
Community
English Mailing List
The openresty-en mailing list is for English speakers.
Chinese Mailing List
The openresty mailing list is for Chinese speakers.
Bugs and Patches
Please report bugs or submit patches by:
creating a ticket on the GitHub Issue Tracker,
or posting to the "OpenResty community".
Author
Yichun "agentzh" Zhang (章亦春) <agentzh@gmail.com>, OpenResty Inc.
Copyright and License
This module is licensed under the BSD license.
Copyright (C) 2015-2019, by Yichun "agentzh" Zhang (章亦春), OpenResty Inc.
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
See Also
the ngx_lua module: https://github.com/openresty/lua-nginx-module
OpenResty: https://openresty.org.cn/
Dependencies
luajit >= 2.1.0, ngx_http_lua >= 0.10.6
Versions
- 2020-07-07 12:52:08
- 2020-04-03 09:03:49
- 2017-11-03 23:25:18
- 2017-08-08 22:12:04
- 2017-04-08 22:23:41
- 2016-09-29 03:06:01