

How to Build a Microservice Gateway on Nginx

马哥Linux运维 · Source: 马哥Linux运维 · 2025-09-02 16:29

Building a Microservice Gateway on Nginx, with Best Practices: An Enterprise-Grade API Gateway from Scratch

Introduction: Why Does Your Microservice Architecture Need a Strong Gateway?

Remember the last production incident? A flood of requests suddenly hit one service, there was no rate limiting, and it took the downstream services down with it. Or the security audit that found API endpoints exposed on the public internet with no authentication at all. Or the ops engineer paged in the middle of the night because one service's logs were scattered across a dozen machines and the problem could not be located quickly...

These pain points all point to the same solution: you need a unified API gateway.

In this post I'll share how our team built a microservice gateway on Nginx that handles over one billion requests a day, along with the pitfalls we hit on the way. The setup has run stably for more than two years and has survived several major promotion events.

1. Architecture Design: More Than a Reverse Proxy

1.1 Overall Architecture

Our gateway architecture has four layers:

├── Access layer (DNS + CDN)
├── Gateway layer (Nginx + OpenResty)
├── Service layer (microservice clusters)
└── Data layer (Redis + MySQL + MongoDB)

Core design principles:

• High availability: active-active deployment with automatic failover

• High performance: make full use of Nginx's event-driven model

• Extensibility: Lua scripting on top of OpenResty

• Observability: a complete monitoring and logging stack

1.2 Technology Comparison

Option  Strengths  Weaknesses  Best fit
Nginx + OpenResty  Extremely fast, stable, mature operations tooling  Fewer built-in features, requires custom development  High concurrency, low latency
Kong  Rich features, good plugin ecosystem  Higher performance overhead, complex to operate  Small/medium scale, quick setup
Spring Cloud Gateway  Java-ecosystem friendly, full-featured  Mediocre performance, heavy resource usage  Java stacks
Envoy  Cloud-native, powerful  Steep learning curve, complex configuration  Kubernetes environments

2. Core Features: From Configuration to Code

2.1 Dynamic Routing

A traditional Nginx setup needs a reload for configuration changes to take effect, which is unacceptable in production. Our solution:

# nginx.conf core configuration
http {
    # Load Lua modules
    lua_package_path "/usr/local/openresty/lualib/?.lua;;";
    lua_shared_dict routes_cache 100m;
    lua_shared_dict upstream_cache 100m;

    # Load routes during the init phase
    init_by_lua_block {
        local route_manager = require "gateway.route_manager"
        route_manager.init()
    }

    # Periodically refresh route configuration
    init_worker_by_lua_block {
        local route_manager = require "gateway.route_manager"
        -- Pull the latest routes from the config center every 10 seconds
        ngx.timer.every(10, route_manager.sync_routes)
    }

    server {
        listen 80;
        server_name api.example.com;

        location / {
            # Declare the variable Lua will assign per request
            set $upstream "";

            # Dynamic routing
            access_by_lua_block {
                local router = require "gateway.router"
                router.route()
            }

            # Dynamic upstream
            proxy_pass http://$upstream;

            # Standard proxy headers
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Request-Id $request_id;
        }
    }
}

The corresponding Lua routing module:

-- gateway/router.lua
local _M = {}
local routes_cache = ngx.shared.routes_cache
local cjson = require "cjson"

function _M.route()
    local uri = ngx.var.uri
    local method = ngx.var.request_method

    -- Look up the route in the shared-dict cache
    local route_key = method .. ":" .. uri
    local route_data = routes_cache:get(route_key)

    if not route_data then
        -- Fall back to fuzzy matching
        route_data = _M.fuzzy_match(uri, method)
    end

    if route_data then
        local route = cjson.decode(route_data)

        -- Select the upstream
        ngx.var.upstream = route.upstream

        -- Add custom headers
        if route.headers then
            for k, v in pairs(route.headers) do
                ngx.req.set_header(k, v)
            end
        end

        -- Path rewriting
        if route.rewrite then
            ngx.req.set_uri(route.rewrite)
        end
    else
        ngx.exit(404)
    end
end

function _M.fuzzy_match(uri, method)
    -- Match path parameters: /api/user/{id} -> /api/user/123
    local all_routes = routes_cache:get("all_routes")
    if not all_routes then
        return nil
    end

    local routes = cjson.decode(all_routes)
    for _, route in ipairs(routes) do
        local pattern = route.path:gsub("{.-}", "([^/]+)")
        local m = ngx.re.match(uri, "^" .. pattern .. "$")

        if m and route.method == method then
            -- Extract path parameters (m[1], m[2], ... are the captures)
            local params = {}
            local i = 1
            while m[i] do
                params[route.params[i]] = m[i]
                i = i + 1
            end

            -- Hand the parameters to the upstream
            ngx.ctx.path_params = params
            return cjson.encode(route)
        end
    end

    return nil
end

return _M
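The fuzzy-matching step above — turning `/api/user/{id}` into a capture pattern and pulling out path parameters — is easy to sanity-check outside OpenResty. Here is a minimal Python sketch of the same idea; the function names are illustrative, not part of the gateway code:

```python
import re

def compile_route(path_template):
    """Turn /api/user/{id} into a regex with named capture groups."""
    pattern = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", path_template)
    return re.compile("^" + pattern + "$")

def match_route(uri, path_template):
    """Return extracted path parameters, or None if the URI does not match."""
    m = compile_route(path_template).match(uri)
    return m.groupdict() if m else None

print(match_route("/api/user/123", "/api/user/{id}"))  # {'id': '123'}
print(match_route("/api/user/1/x", "/api/user/{id}"))  # None
```

Named groups make the Lua module's separate `route.params` list unnecessary; the Lua version needs it because `([^/]+)` captures are positional.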

2.2 Smart Load Balancing

Rather than plain round-robin, we adjust weights dynamically based on observed response times:

-- gateway/balancer.lua
local _M = {}
local cjson = require "cjson"
local upstream_cache = ngx.shared.upstream_cache

function _M.get_server(upstream_name)
    local servers_key = "servers:" .. upstream_name
    local servers_data = upstream_cache:get(servers_key)

    if not servers_data then
        return nil
    end

    local servers = cjson.decode(servers_data)

    -- Pick a server weighted by response time
    local total_weight = 0
    local weighted_servers = {}

    for _, server in ipairs(servers) do
        -- Fetch per-server statistics
        local stats_key = "stats:" .. server.host .. ":" .. server.port
        local stats = upstream_cache:get(stats_key)

        if stats then
            stats = cjson.decode(stats)
            -- Shorter response time -> higher weight
            local weight = 1000 / (stats.avg_response_time + 1)
            -- Penalize the error rate
            weight = weight * (1 - stats.error_rate)
            -- Apply the server's configured base weight
            weight = weight * server.weight

            table.insert(weighted_servers, {
                server = server,
                weight = weight,
                range_start = total_weight,
                range_end = total_weight + weight
            })

            total_weight = total_weight + weight
        else
            -- New server: fall back to its configured weight
            table.insert(weighted_servers, {
                server = server,
                weight = server.weight,
                range_start = total_weight,
                range_end = total_weight + server.weight
            })
            total_weight = total_weight + server.weight
        end
    end

    -- Weighted random selection
    local random_weight = math.random() * total_weight

    for _, ws in ipairs(weighted_servers) do
        if random_weight >= ws.range_start and random_weight < ws.range_end then
            return ws.server
        end
    end

    -- Fall back to the first server
    return servers[1]
end

-- Update per-server statistics
function _M.update_stats(server, response_time, is_error)
    local stats_key = "stats:" .. server.host .. ":" .. server.port
    local stats = upstream_cache:get(stats_key)

    if not stats then
        stats = {
            total_requests = 0,
            total_response_time = 0,
            avg_response_time = 0,
            error_count = 0,
            error_rate = 0
        }
    else
        stats = cjson.decode(stats)
    end

    -- Update the running statistics
    stats.total_requests = stats.total_requests + 1
    stats.total_response_time = stats.total_response_time + response_time
    stats.avg_response_time = stats.total_response_time / stats.total_requests

    if is_error then
        stats.error_count = stats.error_count + 1
    end

    stats.error_rate = stats.error_count / stats.total_requests

    -- Persist with a TTL to prevent unbounded memory growth
    upstream_cache:set(stats_key, cjson.encode(stats), 300)
end

return _M
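The selection logic boils down to: compute a weight per server from its latency, error rate, and base weight, then do a weighted random draw over the cumulative ranges. A self-contained Python sketch of that core (field names like `avg_rt_ms` are illustrative):

```python
import random

def pick_server(servers):
    """Weighted-random pick: weight ~ base_weight * (1 - error_rate) / latency."""
    weighted = []
    total = 0.0
    for s in servers:
        w = s["weight"] * (1 - s["error_rate"]) * 1000.0 / (s["avg_rt_ms"] + 1)
        total += w
        weighted.append((s, total))  # cumulative upper bound of this server's range
    r = random.random() * total
    for s, upper in weighted:
        if r < upper:
            return s
    return servers[0]  # fallback, e.g. if total rounds to zero
```

Run enough draws and the faster server dominates: with one server at 10 ms and one at 1000 ms (equal base weight, no errors), the fast one gets roughly 99% of the traffic.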

2.3 Rate Limiting and Circuit Breaking

Distributed rate limiting with a token-bucket algorithm:

-- gateway/rate_limiter.lua
local _M = {}
local redis = require "resty.redis"

-- Token-bucket rate limiting
function _M.token_bucket_limit(key, rate, capacity)
    local red = redis:new()
    red:set_timeout(1000)

    local ok, err = red:connect("127.0.0.1", 6379)
    if not ok then
        ngx.log(ngx.ERR, "Redis connection failed: ", err)
        return true  -- fail open: let the request through
    end

    -- Atomic check-and-refill via a Redis-side Lua script
    local script = [[
        local key = KEYS[1]
        local rate = tonumber(ARGV[1])
        local capacity = tonumber(ARGV[2])
        local now = tonumber(ARGV[3])
        local requested = tonumber(ARGV[4] or 1)

        local bucket = redis.call('HMGET', key, 'tokens', 'last_refill')
        local tokens = tonumber(bucket[1] or capacity)
        local last_refill = tonumber(bucket[2] or now)

        -- Refill tokens for the elapsed time
        local elapsed = math.max(0, now - last_refill)
        local tokens_to_add = elapsed * rate
        tokens = math.min(capacity, tokens + tokens_to_add)

        local ttl = math.ceil(capacity / rate) + 1
        if tokens >= requested then
            tokens = tokens - requested
            redis.call('HMSET', key, 'tokens', tokens, 'last_refill', now)
            redis.call('EXPIRE', key, ttl)
            return 1
        else
            redis.call('HMSET', key, 'tokens', tokens, 'last_refill', now)
            redis.call('EXPIRE', key, ttl)
            return 0
        end
    ]]

    local now = ngx.now()
    local res = red:eval(script, 1, key, rate, capacity, now, 1)

    red:set_keepalive(10000, 100)

    return res == 1
end

-- Circuit breaker (breaker_cache must be declared with lua_shared_dict)
function _M.circuit_breaker(service_name)
    local breaker_key = "breaker:" .. service_name
    local breaker_cache = ngx.shared.breaker_cache

    -- Read the breaker state
    local state = breaker_cache:get(breaker_key .. ":state") or "closed"

    if state == "open" then
        -- Is it time to go half-open?
        local open_time = breaker_cache:get(breaker_key .. ":open_time")
        if ngx.now() - open_time > 30 then  -- try half-open after 30 seconds
            breaker_cache:set(breaker_key .. ":state", "half_open")
            state = "half_open"
        else
            return false, "Circuit breaker is open"
        end
    end

    if state == "half_open" then
        -- Half-open: let only a handful of probe requests through
        local half_open_count = breaker_cache:incr(breaker_key .. ":half_open_count", 1, 0)
        if half_open_count > 5 then  -- allow at most 5 probes
            return false, "Circuit breaker is half open, limit exceeded"
        end
    end

    return true
end

-- Update the breaker state after a request completes
function _M.update_breaker(service_name, is_success)
    local breaker_key = "breaker:" .. service_name
    local breaker_cache = ngx.shared.breaker_cache

    local state = breaker_cache:get(breaker_key .. ":state") or "closed"

    if state == "closed" then
        if not is_success then
            -- Count the failure (counter expires after 60 seconds)
            local fail_count = breaker_cache:incr(breaker_key .. ":fail_count", 1, 0, 60)

            -- 10 failures within the window opens the breaker
            if fail_count >= 10 then
                breaker_cache:set(breaker_key .. ":state", "open")
                breaker_cache:set(breaker_key .. ":open_time", ngx.now())
                ngx.log(ngx.WARN, "Circuit breaker opened for: ", service_name)
            end
        end
    elseif state == "half_open" then
        if is_success then
            -- A successful probe closes the breaker
            breaker_cache:set(breaker_key .. ":state", "closed")
            breaker_cache:delete(breaker_key .. ":fail_count")
            breaker_cache:delete(breaker_key .. ":half_open_count")
            ngx.log(ngx.INFO, "Circuit breaker closed for: ", service_name)
        else
            -- A failed probe reopens it
            breaker_cache:set(breaker_key .. ":state", "open")
            breaker_cache:set(breaker_key .. ":open_time", ngx.now())
        end
    end
end

return _M
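The Redis script above is just a token bucket with lazy refill: on every request, top the bucket up for the time elapsed since the last refill, then spend tokens if enough are available. The same algorithm in a standalone, in-process Python form (the Redis version exists only to share one bucket across gateway nodes):

```python
import time

class TokenBucket:
    """In-process token bucket with lazy refill; mirrors the Redis script's logic."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last_refill = time.monotonic()

    def allow(self, requested=1, now=None):
        """Return True if the request may proceed, spending `requested` tokens."""
        now = time.monotonic() if now is None else now
        elapsed = max(0.0, now - self.last_refill)
        # Refill for the elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last_refill = now
        if self.tokens >= requested:
            self.tokens -= requested
            return True
        return False
```

The `now` parameter makes the refill arithmetic testable without sleeping; in production you would simply call `allow()`.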

2.4 Unified Authentication and Authorization

JWT authentication plus fine-grained permission checks:

-- gateway/auth.lua
local _M = {}
local cjson = require "cjson"
local jwt = require "resty.jwt"
local redis = require "resty.redis"

-- JWT verification
function _M.verify_jwt()
    local auth_header = ngx.var.http_authorization

    if not auth_header then
        return false, "Missing authorization header"
    end

    local _, _, token = string.find(auth_header, "Bearer%s+(.+)")

    if not token then
        return false, "Invalid authorization header format"
    end

    -- Verify the JWT
    local jwt_secret = os.getenv("JWT_SECRET") or "your-secret-key"
    local jwt_obj = jwt:verify(jwt_secret, token)

    if not jwt_obj.verified then
        return false, jwt_obj.reason
    end

    -- Check the token blacklist (supports proactive revocation)
    local red = redis:new()
    red:set_timeout(1000)
    local ok, err = red:connect("127.0.0.1", 6379)

    if ok then
        local blacklisted = red:get("blacklist:" .. token)
        if blacklisted and blacklisted ~= ngx.null then
            red:set_keepalive(10000, 100)
            return false, "Token has been revoked"
        end
        red:set_keepalive(10000, 100)
    end

    -- Stash the user info in the request context
    ngx.ctx.user = jwt_obj.payload

    return true
end

-- Permission check
function _M.check_permission(required_permission)
    local user = ngx.ctx.user

    if not user then
        return false, "User not authenticated"
    end

    -- Fetch the user's permissions from cache or the store
    local permissions = _M.get_user_permissions(user.user_id)

    -- Wildcards are supported
    for _, perm in ipairs(permissions) do
        if _M.match_permission(perm, required_permission) then
            return true
        end
    end

    return false, "Permission denied"
end

-- Permission matching (wildcard support)
function _M.match_permission(user_perm, required_perm)
    -- Convert the permission string into a pattern:
    -- user* matches userprofile
    local pattern = user_perm:gsub("*", ".*")
    pattern = "^" .. pattern .. "$"

    return ngx.re.match(required_perm, pattern) ~= nil
end

-- Fetch user permissions (cached)
function _M.get_user_permissions(user_id)
    local cache_key = "permissions:" .. user_id
    local permissions_cache = ngx.shared.permissions_cache

    -- Local shared-dict cache first
    local cached = permissions_cache:get(cache_key)
    if cached then
        return cjson.decode(cached)
    end

    -- Fall back to Redis
    local red = redis:new()
    red:set_timeout(1000)
    local ok, err = red:connect("127.0.0.1", 6379)

    if not ok then
        ngx.log(ngx.ERR, "Redis connection failed: ", err)
        return {}
    end

    local permissions = red:smembers("user:" .. user_id)
    red:set_keepalive(10000, 100)

    -- Cache for five minutes
    permissions_cache:set(cache_key, cjson.encode(permissions), 300)

    return permissions
end

-- API signature verification (anti-replay)
function _M.verify_signature()
    local signature = ngx.var.http_x_signature
    local timestamp = ngx.var.http_x_timestamp
    local nonce = ngx.var.http_x_nonce

    if not signature or not timestamp or not nonce then
        return false, "Missing signature headers"
    end

    -- Timestamp must be within five minutes
    local current_time = ngx.now()
    if math.abs(current_time - tonumber(timestamp)) > 300 then
        return false, "Request expired"
    end

    -- Reject reused nonces
    local red = redis:new()
    red:set_timeout(1000)
    local ok, err = red:connect("127.0.0.1", 6379)

    if ok then
        local nonce_key = "nonce:" .. nonce
        local exists = red:get(nonce_key)

        if exists and exists ~= ngx.null then
            red:set_keepalive(10000, 100)
            return false, "Nonce already used"
        end

        -- Record the nonce with a five-minute expiry
        red:setex(nonce_key, 300, "1")
        red:set_keepalive(10000, 100)
    end

    -- Verify the signature
    local method = ngx.var.request_method
    local uri = ngx.var.uri
    local body = ngx.req.get_body_data() or ""

    local sign_string = method .. uri .. timestamp .. nonce .. body
    local app_secret = _M.get_app_secret(ngx.var.http_x_app_id)

    -- Note: the OpenResty core ships only ngx.hmac_sha1; for HMAC-SHA256
    -- you need an extra library such as lua-resty-hmac.
    local expected_signature = ngx.encode_base64(
        ngx.hmac_sha1(app_secret, sign_string)
    )

    if signature ~= expected_signature then
        return false, "Invalid signature"
    end

    return true
end

return _M
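To make the signature scheme concrete from the client's side, here is a Python sketch of both halves: building the signature the gateway expects, and the server-side check (timestamp freshness, then a constant-time compare). It uses HMAC-SHA256 throughout; header names mirror the `X-Signature`/`X-Timestamp`/`X-Nonce` headers the Lua module reads:

```python
import base64
import hashlib
import hmac
import time

def sign(method, uri, timestamp, nonce, body, secret):
    """Client side: HMAC over method + uri + timestamp + nonce + body."""
    msg = (method + uri + timestamp + nonce + body).encode()
    digest = hmac.new(secret.encode(), msg, hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

def verify(headers, method, uri, body, secret, now=None, max_skew=300):
    """Server side: reject stale timestamps, then compare signatures."""
    now = time.time() if now is None else now
    if abs(now - float(headers["X-Timestamp"])) > max_skew:
        return False, "Request expired"
    expected = sign(method, uri, headers["X-Timestamp"],
                    headers["X-Nonce"], body, secret)
    if not hmac.compare_digest(expected, headers["X-Signature"]):
        return False, "Invalid signature"
    return True, None
```

The nonce replay check (Redis `SETEX` in the Lua version) is omitted here; `hmac.compare_digest` prevents timing attacks that a plain `==` comparison would allow.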

2.5 Request/Response Transformation

Handling compatibility across API versions:

-- gateway/transformer.lua
local _M = {}
local cjson = require "cjson"

-- Request transformation
function _M.transform_request()
    local uri = ngx.var.uri
    local version = ngx.var.http_x_api_version or "v2"

    -- Convert the request according to its declared version
    if version == "v1" then
        _M.transform_v1_to_v2_request()
    end

    -- Add a tracing header
    if not ngx.var.http_x_request_id then
        ngx.req.set_header("X-Request-Id", ngx.var.request_id)
    end

    -- Add origin markers
    ngx.req.set_header("X-Gateway-Time", ngx.now())
    ngx.req.set_header("X-Forwarded-Host", ngx.var.host)
    ngx.req.set_header("X-Forwarded-Proto", ngx.var.scheme)
end

-- v1 -> v2 request conversion
function _M.transform_v1_to_v2_request()
    ngx.req.read_body()
    local body = ngx.req.get_body_data()

    if body then
        local data = cjson.decode(body)

        -- Field mapping
        local field_mapping = {
            user_name = "username",
            user_id = "userId",
            create_time = "createdAt"
        }

        for old_field, new_field in pairs(field_mapping) do
            if data[old_field] then
                data[new_field] = data[old_field]
                data[old_field] = nil
            end
        end

        -- Replace the request body
        ngx.req.set_body_data(cjson.encode(data))
    end
end

-- Response transformation (runs in body_filter_by_lua; the header
-- assignments below belong in header_filter_by_lua, before the body is sent)
function _M.transform_response()
    local version = ngx.var.http_x_api_version or "v2"

    if version == "v1" then
        -- Grab the response body chunk
        local resp_body = ngx.arg[1]

        if resp_body then
            local ok, data = pcall(cjson.decode, resp_body)

            if ok then
                -- v2 -> v1 response conversion
                data = _M.transform_v2_to_v1_response(data)

                -- Replace the chunk
                ngx.arg[1] = cjson.encode(data)
            end
        end
    end

    -- Response headers
    ngx.header["X-Gateway-Response-Time"] = ngx.now() - (ngx.ctx.start_time or ngx.now())
    ngx.header["X-Request-Id"] = ngx.var.request_id
end

-- v2 -> v1 response conversion
function _M.transform_v2_to_v1_response(data)
    -- Field mapping (reversed)
    local field_mapping = {
        username = "user_name",
        userId = "user_id",
        createdAt = "create_time"
    }

    local function transform_object(obj)
        if type(obj) ~= "table" then
            return obj
        end

        for new_field, old_field in pairs(field_mapping) do
            if obj[new_field] then
                obj[old_field] = obj[new_field]
                obj[new_field] = nil
            end
        end

        -- Recurse into nested objects
        for k, v in pairs(obj) do
            obj[k] = transform_object(v)
        end

        return obj
    end

    return transform_object(data)
end

-- Protocol conversion (GraphQL to REST)
function _M.graphql_to_rest()
    local body = ngx.req.get_body_data()

    if not body then
        return
    end

    local graphql_query = cjson.decode(body)

    -- Parse the GraphQL operation name
    local operation = graphql_query.query:match("(%w+)%s*{")

    -- Map it to a REST endpoint
    local endpoint_mapping = {
        getUser = {method = "GET", path = "/api/users/"},
        createUser = {method = "POST", path = "/api/users"},
        updateUser = {method = "PUT", path = "/api/users/"},
        deleteUser = {method = "DELETE", path = "/api/users/"}
    }

    local mapping = endpoint_mapping[operation]

    if mapping then
        -- Extract variables
        local params = graphql_query.variables or {}

        -- Rewrite into a REST request
        ngx.req.set_method(ngx["HTTP_" .. mapping.method])

        if params.id then
            ngx.req.set_uri(mapping.path .. params.id)
            params.id = nil
        else
            ngx.req.set_uri(mapping.path)
        end

        -- Set the request body
        if mapping.method ~= "GET" then
            ngx.req.set_body_data(cjson.encode(params))
        end
    end
end

return _M
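The version shims above are all one operation: recursively rename keys according to a mapping table. A compact, self-contained Python version of that transform (the mapping values are taken from the module above):

```python
def transform(obj, mapping):
    """Recursively rename dict keys per `mapping`, descending into lists/dicts."""
    if isinstance(obj, list):
        return [transform(v, mapping) for v in obj]
    if not isinstance(obj, dict):
        return obj
    return {mapping.get(k, k): transform(v, mapping) for k, v in obj.items()}

# v2 -> v1 field names, mirroring transform_v2_to_v1_response
V2_TO_V1 = {"username": "user_name", "userId": "user_id", "createdAt": "create_time"}
```

Unlike the in-place Lua version, this builds a fresh structure, which sidesteps the subtle bug of mutating a table while iterating it.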

3. Performance Tuning: Making the Gateway Fly

3.1 Caching Strategy

A multi-level cache architecture that substantially cuts response times:

# Local cache zone for API responses
proxy_cache_path /var/cache/nginx/api_cache
    levels=1:2
    keys_zone=api_cache:100m
    max_size=10g
    inactive=60m
    use_temp_path=off;

# Separate zone for static content
proxy_cache_path /var/cache/nginx/static_cache
    levels=1:2
    keys_zone=static_cache:50m
    max_size=5g
    inactive=7d
    use_temp_path=off;

server {
    location /api/ {
        # Cache key definition
        set $cache_key "$scheme$request_method$host$request_uri$is_args$args";

        # Cache logic handled in Lua
        access_by_lua_block {
            local cache = require "gateway.cache"
            cache.handle_cache()
        }

        # Nginx proxy cache settings
        proxy_cache api_cache;
        proxy_cache_key $cache_key;
        proxy_cache_valid 200 304 5m;
        proxy_cache_valid 404 1m;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_background_update on;
        proxy_cache_lock on;
        proxy_cache_lock_timeout 5s;

        # Expose the cache status
        add_header X-Cache-Status $upstream_cache_status;

        proxy_pass http://backend;
    }

    location /static/ {
        proxy_cache static_cache;
        proxy_cache_valid 200 304 7d;
        proxy_cache_valid any 1h;

        # Support range requests (resumable downloads)
        proxy_set_header Range $http_range;
        proxy_set_header If-Range $http_if_range;

        proxy_pass http://static_backend;
    }
}

The smart cache-control Lua script:

-- gateway/cache.lua
local _M = {}
local cjson = require "cjson"
local redis = require "resty.redis"

function _M.handle_cache()
    local method = ngx.var.request_method
    local uri = ngx.var.uri

    -- Only cache GET requests
    if method ~= "GET" then
        return
    end

    -- Per-user cache key
    local cache_key = _M.generate_cache_key()

    -- Try Redis first
    local cached_response = _M.get_from_redis(cache_key)

    if cached_response then
        if not _M.is_stale(cached_response) then
            ngx.header["Content-Type"] = cached_response.content_type
            ngx.header["X-Cache-Hit"] = "redis"
            ngx.say(cached_response.body)
            ngx.exit(200)
        else
            -- Serve stale, refresh asynchronously
            ngx.timer.at(0, function(premature)
                if not premature then
                    _M.refresh_cache(cache_key, uri)
                end
            end)
        end
    end

    -- Mark the request so the response can be cached afterwards
    ngx.ctx.cache_key = cache_key
    ngx.ctx.should_cache = true
end

function _M.generate_cache_key()
    local user = ngx.ctx.user
    local uri = ngx.var.uri
    local args = ngx.var.args or ""

    -- Account for per-user personalization
    local user_id = user and user.user_id or "anonymous"

    return ngx.md5(user_id .. ":" .. uri .. ":" .. args)
end

function _M.get_from_redis(key)
    local red = redis:new()
    red:set_timeout(1000)

    local ok, err = red:connect("127.0.0.1", 6379)
    if not ok then
        return nil
    end

    local res = red:get("cache:" .. key)
    red:set_keepalive(10000, 100)

    if res and res ~= ngx.null then
        return cjson.decode(res)
    end

    return nil
end

function _M.is_stale(cached_response)
    local ttl = cached_response.ttl or 300
    local cached_time = cached_response.cached_at or 0

    return (ngx.now() - cached_time) > ttl
end

function _M.refresh_cache(cache_key, uri)
    -- Re-fetch from the backend to refresh the cache
    local httpc = require("resty.http").new()

    local res, err = httpc:request_uri("http://backend" .. uri, {
        method = "GET",
        headers = {
            ["X-Cache-Refresh"] = "true"
        }
    })

    if res and res.status == 200 then
        _M.save_to_redis(cache_key, res.body, res.headers)
    end
end

function _M.save_to_redis(key, body, headers)
    local red = redis:new()
    red:set_timeout(1000)

    local ok, err = red:connect("127.0.0.1", 6379)
    if not ok then
        return
    end

    local cache_data = {
        body = body,
        content_type = headers["Content-Type"],
        cached_at = ngx.now(),
        ttl = 300
    }

    red:setex("cache:" .. key, 300, cjson.encode(cache_data))
    red:set_keepalive(10000, 100)
end

return _M
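Two pieces of the module are worth isolating: the per-user cache key (an MD5 over `user_id:uri:args`) and the hit/stale/miss decision that drives stale-while-revalidate. A small Python sketch of both, with the same 300-second default TTL:

```python
import hashlib

def cache_key(user_id, uri, args=""):
    """Per-user cache key, mirroring generate_cache_key in the Lua module."""
    return hashlib.md5(f"{user_id}:{uri}:{args}".encode()).hexdigest()

def cache_decision(entry, now):
    """Return 'miss', 'hit', or 'stale' (serve old copy, refresh in background)."""
    if entry is None:
        return "miss"
    ttl = entry.get("ttl", 300)
    if now - entry["cached_at"] <= ttl:
        return "hit"
    return "stale"
```

Keying on the user id means two users never share a personalized entry; the trade-off is a much larger key space, which is why the Lua version also sets a Redis TTL on every entry.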

3.2 Connection Pool Tuning

# Upstream connection pool configuration
upstream backend {
    # least_conn load balancing (must precede keepalive)
    least_conn;

    # Dynamic server list
    server 192.168.1.10:8080 max_fails=2 fail_timeout=10s;
    server 192.168.1.11:8080 max_fails=2 fail_timeout=10s;
    server 192.168.1.12:8080 max_fails=2 fail_timeout=10s backup;

    # Keep long-lived connections to the upstream
    keepalive 256;
    keepalive_requests 1000;
    keepalive_timeout 60s;
}

http {
    # Client connection tuning
    keepalive_timeout 65;
    keepalive_requests 100;

    # Proxy tuning
    proxy_connect_timeout 5s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;
    proxy_buffer_size 32k;
    proxy_buffers 4 64k;
    proxy_busy_buffers_size 128k;
    proxy_temp_file_write_size 256k;

    # HTTP/2 tuning
    http2_max_field_size 16k;
    http2_max_header_size 32k;

    # Required for upstream keepalive reuse
    proxy_http_version 1.1;
    proxy_set_header Connection "";
}

3.3 Memory Management

-- gateway/memory_manager.lua
local _M = {}

-- Periodically purge expiring cache entries
function _M.cleanup_expired_cache()
    local cache_dict = ngx.shared.routes_cache
    local keys = cache_dict:get_keys(0)  -- 0 = fetch all keys

    for _, key in ipairs(keys) do
        local ttl = cache_dict:ttl(key)

        -- Drop keys that are about to expire (threshold illustrative)
        if ttl and ttl < 10 then
            cache_dict:delete(key)
        end
    end
end

-- Monitor shared-dict memory usage
function _M.monitor_memory()
    local memory_stats = {}

    for _, name in ipairs({"routes_cache", "upstream_cache"}) do
        local dict = ngx.shared[name]
        if dict then
            local capacity = dict:capacity()
            local free = dict:free_space()
            local usage = (capacity - free) / capacity

            memory_stats[name] = {capacity = capacity, free_space = free}

            if usage > 0.8 then
                ngx.log(ngx.WARN, string.format(
                    "Memory usage warning: %s is %.2f%% full",
                    name,
                    usage * 100
                ))

                -- Trigger a cleanup
                _M.force_cleanup(name)
            end
        end
    end

    return memory_stats
end

-- Forced cache cleanup
function _M.force_cleanup(cache_name)
    local cache = ngx.shared[cache_name]

    if not cache then
        return
    end

    -- Flush anything already expired first
    cache:flush_expired()

    -- If that is not enough, drop the oldest 10%
    local keys = cache:get_keys(0)
    local to_delete = math.floor(#keys * 0.1)

    for i = 1, to_delete do
        cache:delete(keys[i])
    end
end

-- Register recurring timers
function _M.init_timers()
    -- Clean up every minute
    ngx.timer.every(60, _M.cleanup_expired_cache)

    -- Check memory every five minutes
    ngx.timer.every(300, _M.monitor_memory)
end

return _M

4. Monitoring and Alerting: Building Observability

4.1 Log Collection

-- gateway/logger.lua
local _M = {}
local cjson = require "cjson"

-- Structured access log (run from log_by_lua)
function _M.access_log()
    local log_data = {
        -- Basics
        timestamp = ngx.now(),
        request_id = ngx.var.request_id,

        -- Request
        method = ngx.var.request_method,
        uri = ngx.var.uri,
        args = ngx.var.args,
        host = ngx.var.host,

        -- Client
        client_ip = ngx.var.remote_addr,
        user_agent = ngx.var.http_user_agent,
        referer = ngx.var.http_referer,

        -- Response
        status = ngx.var.status,
        bytes_sent = ngx.var.bytes_sent,
        request_time = ngx.var.request_time,
        upstream_response_time = ngx.var.upstream_response_time,

        -- Upstream
        upstream_addr = ngx.var.upstream_addr,
        upstream_status = ngx.var.upstream_status,

        -- Cache
        cache_status = ngx.var.upstream_cache_status,

        -- User
        user_id = ngx.ctx.user and ngx.ctx.user.user_id or nil,

        -- Tracing
        trace_id = ngx.var.http_x_trace_id,
        span_id = ngx.var.http_x_span_id
    }

    -- Ship the log asynchronously
    _M.write_log(log_data)

    -- Slow-request alert
    if tonumber(ngx.var.request_time) > 3 then
        _M.alert_slow_request(log_data)
    end

    -- Error alert
    if tonumber(ngx.var.status) >= 500 then
        _M.alert_error(log_data)
    end
end

-- Ship logs to Kafka (lua-resty-kafka's async producer buffers
-- and flushes in the background)
function _M.write_log(log_data)
    local kafka_producer = require "resty.kafka.producer"

    local broker_list = {
        {host = "127.0.0.1", port = 9092}
    }

    local producer = kafka_producer:new(broker_list, {
        producer_type = "async",
        batch_num = 200,
        batch_size = 1048576,
        max_buffering = 50000
    })

    local ok, err = producer:send("gateway-logs", nil, cjson.encode(log_data))

    if not ok then
        ngx.log(ngx.ERR, "Failed to send log to Kafka: ", err)
        -- Fall back to a local file
        _M.write_local_log(log_data)
    end
end

-- Local log fallback (blocking I/O; acceptable only as a degraded path)
function _M.write_local_log(log_data)
    local file = io.open("/var/log/nginx/gateway_access.log", "a+")
    if file then
        file:write(cjson.encode(log_data) .. "\n")
        file:close()
    end
end

-- Slow-request alert
function _M.alert_slow_request(log_data)
    local alert = {
        type = "SLOW_REQUEST",
        level = "WARNING",
        service = "api-gateway",
        message = string.format(
            "Slow request detected: %s %s took %.2fs",
            log_data.method,
            log_data.uri,
            log_data.request_time
        ),
        details = log_data,
        timestamp = ngx.now()
    }

    _M.send_alert(alert)
end

-- 5xx error alert
function _M.alert_error(log_data)
    _M.send_alert({
        type = "UPSTREAM_ERROR",
        level = "ERROR",
        service = "api-gateway",
        message = string.format("HTTP %s on %s %s",
            log_data.status, log_data.method, log_data.uri),
        details = log_data,
        timestamp = ngx.now()
    })
end

-- Push an alert to the alerting platform (cosockets are unavailable
-- in the log phase, so the HTTP call runs inside a zero-delay timer)
function _M.send_alert(alert)
    ngx.timer.at(0, function(premature)
        if premature then
            return
        end
        local httpc = require("resty.http").new()
        httpc:request_uri("http://alert-system/api/alerts", {
            method = "POST",
            body = cjson.encode(alert),
            headers = {
                ["Content-Type"] = "application/json"
            }
        })
    end)
end

return _M

4.2 Metrics Collection

-- gateway/metrics.lua
local _M = {}

-- Initialize Prometheus metrics (uses knyar/nginx-lua-prometheus;
-- "prometheus_metrics" must be declared as a lua_shared_dict)
function _M.init()
    local prometheus = require("prometheus").init("prometheus_metrics")
    _M.prometheus = prometheus

    -- Metric definitions
    _M.request_count = prometheus:counter(
        "gateway_requests_total",
        "Total number of requests",
        {"method", "path", "status"}
    )

    _M.request_duration = prometheus:histogram(
        "gateway_request_duration_seconds",
        "Request duration in seconds",
        {"method", "path"}
    )

    _M.upstream_duration = prometheus:histogram(
        "gateway_upstream_duration_seconds",
        "Upstream response time in seconds",
        {"upstream", "method", "path"}
    )

    _M.active_connections = prometheus:gauge(
        "gateway_active_connections",
        "Number of active connections"
    )

    _M.rate_limit_hits = prometheus:counter(
        "gateway_rate_limit_hits_total",
        "Number of rate limit hits",
        {"client", "rule"}
    )

    _M.circuit_breaker_state = prometheus:gauge(
        "gateway_circuit_breaker_state",
        "Circuit breaker state (0=closed, 1=open, 2=half-open)",
        {"service"}
    )
end

-- Record per-request metrics (run from log_by_lua)
function _M.log()
    local method = ngx.var.request_method
    local path = ngx.var.uri
    local status = ngx.var.status

    -- Request count
    _M.request_count:inc(1, {method, path, status})

    -- Request duration
    local request_time = tonumber(ngx.var.request_time) or 0
    _M.request_duration:observe(request_time, {method, path})

    -- Upstream duration
    local upstream_time = tonumber(ngx.var.upstream_response_time) or 0
    local upstream = ngx.var.upstream_addr or "unknown"
    _M.upstream_duration:observe(upstream_time, {upstream, method, path})

    -- Active connections
    _M.active_connections:set(tonumber(ngx.var.connections_active))
end

-- Expose the metrics endpoint
function _M.collect()
    _M.prometheus:collect()
end

return _M

4.3 Health Checks

# Active health checks (the check* directives require the
# nginx_upstream_check_module patch, bundled with Tengine)
upstream backend {
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;

    check interval=3000 rise=2 fall=3 timeout=1000 type=http;
    check_http_send "GET /health HTTP/1.0\r\n\r\n";
    check_http_expect_alive http_2xx http_3xx;
}

server {
    # The gateway's own health endpoint
    location /health {
        access_log off;

        content_by_lua_block {
            local health = require "gateway.health"
            health.check()
        }
    }

    # Prometheus metrics endpoint
    location /metrics {
        access_log off;

        content_by_lua_block {
            local metrics = require "gateway.metrics"
            metrics.collect()
        }
    }

    # Upstream health status page
    location /upstream_status {
        access_log off;

        check_status;

        access_by_lua_block {
            -- Simple IP allowlist (exact-match only; a CIDR entry such as
            -- 10.0.0.0/8 needs real prefix matching, e.g. lua-resty-iputils)
            local allowed_ips = {
                ["127.0.0.1"] = true,
                ["10.0.0.0/8"] = true
            }

            local client_ip = ngx.var.remote_addr
            if not allowed_ips[client_ip] then
                ngx.exit(403)
            end
        }
    }
}

The corresponding health-check Lua module:

-- gateway/health.lua
local _M = {}
local cjson = require "cjson"

function _M.check()
    local checks = {}
    local healthy = true

    -- Redis connectivity
    local redis_health = _M.check_redis()
    checks.redis = redis_health
    healthy = healthy and redis_health.healthy

    -- Upstream services
    local upstream_health = _M.check_upstreams()
    checks.upstreams = upstream_health
    healthy = healthy and upstream_health.healthy

    -- Memory usage
    local memory_health = _M.check_memory()
    checks.memory = memory_health
    healthy = healthy and memory_health.healthy

    -- Respond
    local status = healthy and 200 or 503

    ngx.status = status
    ngx.header["Content-Type"] = "application/json"
    ngx.say(cjson.encode({
        status = healthy and "UP" or "DOWN",
        timestamp = ngx.now(),
        checks = checks
    }))
end

function _M.check_redis()
    local redis = require "resty.redis"
    local red = redis:new()
    red:set_timeout(1000)

    local ok, err = red:connect("127.0.0.1", 6379)

    if not ok then
        return {
            healthy = false,
            message = "Redis connection failed: " .. err
        }
    end

    -- Ping test
    local res = red:ping()
    red:set_keepalive(10000, 100)

    return {
        healthy = res == "PONG",
        message = res == "PONG" and "Redis is healthy" or "Redis ping failed"
    }
end

function _M.check_upstreams()
    local upstream_cache = ngx.shared.upstream_cache
    local all_upstreams = upstream_cache:get("all_upstreams")

    if not all_upstreams then
        return {
            healthy = false,
            message = "No upstreams configured"
        }
    end

    local upstreams = cjson.decode(all_upstreams)
    local healthy_count = 0
    local total_count = 0

    for name, servers in pairs(upstreams) do
        for _, server in ipairs(servers) do
            total_count = total_count + 1

            local stats_key = "stats:" .. server.host .. ":" .. server.port
            local stats = upstream_cache:get(stats_key)

            if stats then
                stats = cjson.decode(stats)
                -- A server counts as healthy while its error rate is below 50%
                if stats.error_rate < 0.5 then
                    healthy_count = healthy_count + 1
                end
            end
        end
    end

    local health_ratio = total_count > 0 and healthy_count / total_count or 0

    return {
        healthy = health_ratio > 0.5,  -- more than half healthy is acceptable
        message = string.format("%d/%d upstreams healthy", healthy_count, total_count),
        details = {
            healthy = healthy_count,
            total = total_count,
            ratio = health_ratio
        }
    }
end

function _M.check_memory()
    local memory_manager = require "gateway.memory_manager"
    local stats = memory_manager.monitor_memory()

    local max_usage = 0
    for name, stat in pairs(stats) do
        local usage = (stat.capacity - stat.free_space) / stat.capacity
        if usage > max_usage then
            max_usage = usage
        end
    end

    return {
        healthy = max_usage < 0.9,
        message = string.format("Max shared-dict usage: %.2f%%", max_usage * 100)
    }
end

return _M

5. High-Availability Deployment

5.1 Active-Active Architecture

# docker-compose.yml
version: '3.8'

services:
  # Gateway node 1
  gateway-1:
    image: openresty/openresty:alpine
    container_name: gateway-1
    volumes:
      - ./conf/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf
      - ./lua:/usr/local/openresty/lualib/gateway
      - ./logs/gateway-1:/var/log/nginx
    ports:
      - "8080:80"
    environment:
      - GATEWAY_NODE_ID=node-1
      - REDIS_HOST=redis
      - CONSUL_HOST=consul
    depends_on:
      - redis
      - consul
    networks:
      - gateway-network
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 2G
        reservations:
          cpus: '1'
          memory: 1G

  # Gateway node 2
  gateway-2:
    image: openresty/openresty:alpine
    container_name: gateway-2
    volumes:
      - ./conf/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf
      - ./lua:/usr/local/openresty/lualib/gateway
      - ./logs/gateway-2:/var/log/nginx
    ports:
      - "8081:80"
    environment:
      - GATEWAY_NODE_ID=node-2
      - REDIS_HOST=redis
      - CONSUL_HOST=consul
    depends_on:
      - redis
      - consul
    networks:
      - gateway-network
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 2G

  # HAProxy (paired with Keepalived) for high availability
  haproxy:
    image: haproxy:2.4-alpine
    container_name: haproxy
    volumes:
      - ./conf/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
    ports:
      - "80:80"
      - "443:443"
      - "8404:8404"  # stats page
    depends_on:
      - gateway-1
      - gateway-2
    networks:
      - gateway-network

  # Redis
  redis:
    image: redis:6-alpine
    container_name: redis
    command: redis-server --appendonly yes
    volumes:
      - redis-data:/data
    ports:
      - "6379:6379"
    networks:
      - gateway-network

  # Consul service discovery
  consul:
    image: consul:1.10
    container_name: consul
    command: agent -server -bootstrap-expect=1 -ui -client=0.0.0.0
    ports:
      - "8500:8500"
      - "8600:8600/udp"
    networks:
      - gateway-network

  # Prometheus monitoring
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - ./conf/prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus-data:/prometheus
    ports:
      - "9090:9090"
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
    networks:
      - gateway-network

  # Grafana dashboards
  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    volumes:
      - grafana-data:/var/lib/grafana
      - ./conf/grafana/dashboards:/etc/grafana/provisioning/dashboards
      - ./conf/grafana/datasources:/etc/grafana/provisioning/datasources
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    networks:
      - gateway-network

networks:
  gateway-network:
    driver: bridge

volumes:
  redis-data:
  prometheus-data:
  grafana-data:

5.2 Canary Releases

-- gateway/canary.lua
local cjson = require "cjson"

local _M = {}

-- Canary routing strategies
function _M.route_canary()
    local uri = ngx.var.uri
    local headers = ngx.req.get_headers()

    -- Strategy 1: header-based canary
    if headers["X-Canary"] == "true" then
        return _M.get_canary_upstream()
    end

    -- Strategy 2: cookie-based canary
    local cookie_canary = ngx.var.cookie_canary
    if cookie_canary == "true" then
        return _M.get_canary_upstream()
    end

    -- Strategy 3: user-ID-based canary
    local user = ngx.ctx.user
    if user and _M.is_canary_user(user.user_id) then
        return _M.get_canary_upstream()
    end

    -- Strategy 4: percentage-of-traffic canary
    local canary_percentage = _M.get_canary_percentage(uri)
    if canary_percentage > 0 then
        local random = math.random(100)
        if random <= canary_percentage then
            return _M.get_canary_upstream()
        end
    end

    -- Default: route to the stable version
    return _M.get_stable_upstream()
end

-- Check whether a user is in the canary group
function _M.is_canary_user(user_id)
    local canary_users = ngx.shared.canary_cache:get("canary_users")

    if not canary_users then
        return false
    end

    canary_users = cjson.decode(canary_users)

    -- Explicit user list
    for _, id in ipairs(canary_users) do
        if id == user_id then
            return true
        end
    end

    -- User-ID ranges: bucket by ID modulo 100
    -- (the threshold below is illustrative; the original line was truncated)
    local user_id_num = tonumber(user_id)
    if user_id_num and user_id_num % 100 < 10 then
        return true
    end

    return false
end

return _M
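One caveat about the percentage strategy above: math.random assigns each request independently, so the same user can flip between stable and canary on successive requests. A stickier variant hashes a stable key instead. This is a sketch, not the article's code; it reuses the module's own upstream helpers and OpenResty's standard ngx.crc32_short API:

```lua
-- gateway/canary_sticky.lua -- illustrative sketch
local _M = {}

-- Deterministically bucket a key into 1..100. The key should be stable
-- per caller: the user ID when authenticated, else the client IP.
function _M.bucket_of(key)
    return (ngx.crc32_short(tostring(key)) % 100) + 1
end

-- Same caller always lands on the same side of the canary split,
-- and raising canary_percentage only ever moves users stable -> canary.
function _M.is_canary(key, canary_percentage)
    return _M.bucket_of(key) <= canary_percentage
end

-- Usage inside route_canary():
--   local key = (ngx.ctx.user and ngx.ctx.user.user_id) or ngx.var.remote_addr
--   if _M.is_canary(key, canary_percentage) then
--       return _M.get_canary_upstream()
--   end

return _M
```

Sticky bucketing also makes canary metrics cleaner, since each user's requests are all served by one version.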

6. Real-World Results and Performance Data

6.1 Performance Test Results

Performance figures for the gateway in our production environment, after tuning:

| Metric | Value | Test conditions |
| --- | --- | --- |
| QPS | 100,000+ | single node, 8 cores / 16 GB |
| P99 latency | < 10 ms | excluding business processing time |
| P95 latency | < 5 ms | excluding business processing time |
| CPU usage | 40-60% | at peak |
| Memory usage | 2-4 GB | including caches |
| Connections | 50,000+ | concurrent connections |

6.2 Incident Case Studies

Case 1: Downstream service avalanche

- Problem: a core service failed, and requests piled up across the board
- Fix: the circuit breaker opened automatically and served degraded responses, containing the failure
- Result: overall availability held at 99.9%
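The breaker behavior in Case 1 can be kept in an Nginx shared dict. A minimal sketch; the dict name breaker_cache, the thresholds, and the degraded response body are illustrative assumptions, not the production code:

```lua
-- gateway/breaker.lua -- illustrative sketch
local _M = {}

local FAILURE_THRESHOLD = 10   -- failures within the window before opening
local OPEN_SECONDS      = 30   -- how long the breaker stays open

-- Record one upstream failure; open the breaker when the threshold is hit.
function _M.record_failure(service)
    local dict = ngx.shared.breaker_cache
    -- incr with init=0 and a 60s expiry gives a rolling failure window
    local failures = dict:incr("fail:" .. service, 1, 0, 60)
    if failures and failures >= FAILURE_THRESHOLD then
        dict:set("open:" .. service, true, OPEN_SECONDS)
    end
end

-- Call before proxying: if open, short-circuit with a degraded response.
function _M.check(service)
    if ngx.shared.breaker_cache:get("open:" .. service) then
        ngx.status = 503
        ngx.header["Content-Type"] = "application/json"
        ngx.say('{"code":503,"message":"service degraded, please retry later"}')
        -- body already emitted; stop request processing here
        return ngx.exit(ngx.HTTP_OK)
    end
end

return _M
```

Because the dict entry for "open:" expires after OPEN_SECONDS, the breaker half-closes automatically: the next request after expiry reaches the upstream and re-opens the breaker only if failures resume.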

Case 2: DDoS attack

- Problem: an attack peaking at millions of requests per second
- Fix: multi-layer rate limiting plus automatic IP blacklisting
- Result: business traffic was completely unaffected
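The automatic IP blacklisting in Case 2 can likewise be sketched with a shared dict. The dict name blacklist_cache and the thresholds are illustrative assumptions; a real deployment would also whitelist trusted sources and read the client IP from X-Forwarded-For when behind a load balancer:

```lua
-- gateway/blacklist.lua -- illustrative sketch
local _M = {}

local REQ_LIMIT = 1000   -- requests per window before banning
local WINDOW    = 1      -- window length in seconds
local BAN_TTL   = 600    -- ban duration in seconds

-- Call from access_by_lua: count requests per client IP and
-- auto-ban IPs that exceed the per-second limit.
function _M.check()
    local dict = ngx.shared.blacklist_cache
    local ip = ngx.var.remote_addr

    -- Already banned: reject immediately
    if dict:get("ban:" .. ip) then
        return ngx.exit(ngx.HTTP_FORBIDDEN)
    end

    -- incr with init=0 and a WINDOW expiry gives a per-IP rolling counter
    local count = dict:incr("cnt:" .. ip, 1, 0, WINDOW)
    if count and count > REQ_LIMIT then
        dict:set("ban:" .. ip, true, BAN_TTL)
        return ngx.exit(ngx.HTTP_FORBIDDEN)
    end
end

return _M
```

Because both the counter and the ban entry carry TTLs, the blacklist is self-cleaning: no cron job is needed to unban IPs or reset counters.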

7. Pitfalls and Best Practices

7.1 Pitfalls We Hit

1. Nginx reload dropping connections

- Problem: configuration updates required a reload, which interrupted in-flight connections
- Fix: dynamic configuration (routes synced into shared dicts by worker timers), so reloads are rarely needed

2. Memory leaks

- Problem: Lua scripts leaking memory
- Fix: use connection pools correctly and release references promptly

3. Stale DNS resolution

- Problem: upstream IP changes were not picked up in time
- Fix: configure a resolver with a sensible DNS cache TTL
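For the DNS pitfall, the relevant nginx.conf pieces look roughly like this; 127.0.0.11 is Docker's embedded DNS inside the compose network above, and the upstream hostname and port are illustrative assumptions:

```nginx
# Re-resolve upstream hostnames instead of caching the first answer forever;
# valid=10s bounds how stale a cached record can get.
resolver 127.0.0.11 valid=10s ipv6=off;
resolver_timeout 3s;

location /api/users/ {
    # Putting the hostname in a variable forces nginx to resolve it
    # at request time rather than once at startup.
    set $backend "http://user-service:8080";
    proxy_pass $backend;
}
```

Without the variable, nginx resolves proxy_pass hostnames once when the configuration loads, which is exactly the staleness problem described above.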

7.2 Best-Practice Recommendations

1. Migrate incrementally: do not rebuild everything at once; roll out in phases
2. Test thoroughly: load tests and failure drills are non-negotiable
3. Monitoring first: solid monitoring is the foundation of stability
4. Keep documentation current: maintain detailed runbooks and incident-response playbooks
5. Drill regularly: rehearse failures periodically to validate the high-availability design

Summary

Building a microservice gateway on Nginx is a systems-engineering effort: architecture, feature implementation, performance tuning, and high availability all demand careful thought and practice. The design and code shared in this article have been validated in production; we hope they serve as a useful reference.

As the choke point of a microservice architecture, the gateway's importance speaks for itself. A well-designed gateway not only provides a unified entry point, but also greatly reduces the complexity borne by individual services and improves the maintainability of the system as a whole.


Original title: 基于Nginx的微服務網(wǎng)關實現(xiàn)與最佳實踐:從零打造企業(yè)級API Gateway

Source: WeChat public account magedu-Linux (馬哥Linux運維). Please credit the source when republishing.
