Browser Concurrent Request Limits

1. Preface

Here's how it started: to test FastAPI's handling of concurrent requests, I wrote a simple front-end test script.

{
    console.time('total');
    const urls = Array(4).fill('http://127.0.0.1:8000/test'),
    tasks = urls.map(url => fetch(url).then(r => r.json()).then(data => data));
    await Promise.allSettled(tasks);
    console.timeEnd('total');
}

The endpoint http://127.0.0.1:8000/test is set up on the server so that each request waits 1 second.

from fastapi import FastAPI
import uvicorn
import asyncio
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

ic = 0

# configure the allowed cross-origin origins
origins = [
    "http://192.168.2.10:3002",
]

# add the CORS middleware
app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

@app.get("/test")
async def test():
    global ic
    ic += 1
    await asyncio.sleep(1)
    print(ic)
    return {"a": ic}

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)

Then something odd happened: in theory this should take just over 1 second, but it actually took over 4 seconds... (⊙﹏⊙)

import aiohttp
import asyncio
import time

async def spider():
    semaphore = asyncio.Semaphore(4)
    async with aiohttp.ClientSession(
        timeout=aiohttp.ClientTimeout(total=30),
        # connector=aiohttp.TCPConnector(limit=4),
    ) as client:
        tasks = [fetch_url(client, 'http://127.0.0.1:8000/test', semaphore=semaphore) for _ in range(4)]
        return await asyncio.gather(*tasks)

async def fetch_url(client, url, semaphore):
    async with semaphore:
        try:
            # use the response as a context manager so the connection is released promptly
            async with client.get(url) as r:
                return await r.text()
        except aiohttp.ClientError as exc:  # aiohttp.http_exceptions is a module, not an exception class
            print(f"Error fetching {url}: {exc}")
            return None

async def main():
    s = time.time()
    results = await spider()
    print(results)
    print(time.time() - s)

if __name__ == "__main__":
    asyncio.run(main())

I switched to Python for the same test, and it behaved perfectly normally: just over 1 second...

Huh...? I've written concurrent fetch on the front end a hundred times... so where is the problem?

2. The Problem

img

Browser connection limitations | Documentation

Browsers limit the number of HTTP connections with the same domain name. This restriction is defined in the HTTP specification (RFC2616). Most modern browsers allow six connections per domain. Most older browsers allow only two connections per domain.

The HTTP 1.1 protocol states that single-user clients should not maintain more than two connections with any server or proxy. This is the reason for browser limits. For more information, see RFC 2616 – Hypertext Transfer Protocol, section 8 – Connections.

Modern browsers are less restrictive than this, allowing a larger number of connections. The RFC does not specify how to prevent the limit being exceeded. Either connections can be blocked from opening or existing connections can be closed.

| Version | Maximum connections |
| --- | --- |
| Internet Explorer® 7.0 | 2 |
| Internet Explorer 8.0 and 9.0 | 6 |
| Internet Explorer 10.0 | 8 |
| Internet Explorer 11.0 | 13 |
| Firefox® | 6 |
| Chrome™ | 6 |
| Safari® | 6 |
| Opera® | 6 |
| iOS® | 6 |
| Android™ | 6 |

Sure enough: browsers limit the number of concurrent requests to the same domain.

3. Summary

Note, though, that there is an even stricter limit in play here, on requests to the identical URL: the browser will not issue concurrent requests for the same URL; only one request is in flight at a time.
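This effect can be modeled with plain asyncio: if identical URLs are forced through a queue of width 1, four 1-second requests take about 4 seconds, while truly parallel requests finish in about 1. A minimal simulation (sleep times scaled down; the `run` helper is illustrative, not browser internals):

```python
import asyncio
import time

DELAY = 0.05  # scaled-down stand-in for the server's 1-second sleep

async def request(sem: asyncio.Semaphore) -> None:
    # each "request" must acquire a slot before it can run
    async with sem:
        await asyncio.sleep(DELAY)

def run(n_requests: int, width: int) -> float:
    """Run n_requests through a queue of the given width; return wall time."""
    async def main() -> float:
        sem = asyncio.Semaphore(width)
        start = time.perf_counter()
        await asyncio.gather(*(request(sem) for _ in range(n_requests)))
        return time.perf_counter() - start
    return asyncio.run(main())

serialized = run(4, width=1)  # identical URLs: one request in flight at a time
parallel = run(4, width=4)    # distinct URLs, all under the connection cap

print(f"serialized: {serialized:.2f}s, parallel: {parallel:.2f}s")
# serialized takes roughly 4x as long as parallel
```

The width-1 case reproduces the mystery above: four identical requests at 1 second each come to roughly 4 seconds.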

from fastapi import FastAPI
import uvicorn
import asyncio
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

ic = 0

# 配置允许的跨域来源
origins = [
    "http://192.168.2.10:3002",
]

# 添加 CORS 中间件
app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

@app.get("/test/{index}")
async def test(index: int):
    global ic
    ic += 1
    print(index)
    await asyncio.sleep(1)
    # print(ic)
    return {"a": ic}

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)

Tweak the front end and back end slightly:

{
    console.time('total');
    const a = Array.from({ length: 24 }, (_, i) =>
        `http://127.0.0.1:8000/test/${i}`
    );

    const tasks = a.map(url => {
        return fetch(url)
            .then(r => r.json())
            .then(data => {
                // console.log(`response received: ${url} @ ${new Date().toISOString()}`);
                return data;
            });
    });

    await Promise.allSettled(tasks);
    console.timeEnd('total');
}
VM111:17 total: 4204.837890625 ms

As you can see, 24 concurrent requests were issued, but the actual concurrency is capped at 6.
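The measured 4204 ms matches simple batching arithmetic: with a cap of 6 connections, 24 one-second requests drain in ceil(24 / 6) = 4 waves, so roughly 4 seconds plus overhead. As a quick check (the numbers mirror the experiments above):

```python
import math

def expected_wall_time(n_requests: int, cap: int, seconds_each: float) -> float:
    """Requests drain in waves of size `cap`; each wave costs one request's latency."""
    return math.ceil(n_requests / cap) * seconds_each

print(expected_wall_time(24, cap=6, seconds_each=1.0))  # 4.0 -> matches the ~4.2s observed
print(expected_wall_time(4, cap=1, seconds_each=1.0))   # 4.0 -> the same-URL case from section 1
```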

4. One more thing

Note, however, that the concurrency limit above applies only to HTTP/1.x; HTTP/2 gets a pass and is not subject to it, since it multiplexes many streams over a single connection.

img

(Xiaohongshu has already enabled HTTP/2 and HTTP/3)

img

(Baidu uses HTTP/1.1 and HTTP/2)

4.1 HTTP/2

Enabling HTTP/2 for FastAPI:

Security: It is typically used over TLS (Transport Layer Security), ensuring secure data transmission.

There is one obstacle: browsers only enable HTTP/2 over HTTPS.

Here I use a self-signed certificate, generated with mkcert:

FiloSottile/mkcert: A simple zero-config tool to make locally trusted development certificates with any names you'd like.

Download the executable and run it locally:

C:\Users\Lian>D:\Downloads\mkcert-v1.4.4-windows-amd64.exe
Usage of mkcert:

        $ mkcert -install
        Install the local CA in the system trust store.

        $ mkcert example.org
        Generate "example.org.pem" and "example.org-key.pem".

        $ mkcert example.com myapp.dev localhost 127.0.0.1 ::1
        Generate "example.com+4.pem" and "example.com+4-key.pem".

        $ mkcert "*.example.it"
        Generate "_wildcard.example.it.pem" and "_wildcard.example.it-key.pem".

        $ mkcert -uninstall
        Uninstall the local CA (but do not delete it).
C:\Users\Lian>D:\Downloads\mkcert-v1.4.4-windows-amd64.exe 127.0.0.1
Created a new local CA 💥
Note: the local CA is not installed in the system trust store.
Run "mkcert -install" for certificates to be trusted automatically ⚠️

Created a new certificate valid for the following names 📜
 - "127.0.0.1"

The certificate is at "./127.0.0.1.pem" and the key at "./127.0.0.1-key.pem"

This issues a certificate for 127.0.0.1 (no port number needed).

img

Two files are generated.

Also, uvicorn does not yet natively support HTTP/2 (h2): [Uvicorn 入门指南: 安装, 配置与使用 – wiki基地 – wiki基地](https://wkbse.com/2025/03/23/uvicorn-入门指南: 安装, 配置与使用-wiki基地/)

img

So here I use Hypercorn instead: Hypercorn documentation - Hypercorn 0.17.3 documentation

Hypercorn is an ASGI web server based on the sans-io hyper, h11, h2, and wsproto libraries and inspired by Gunicorn. Hypercorn supports HTTP/1, HTTP/2, WebSockets (over HTTP/1 and HTTP/2), ASGI/2, and ASGI/3 specifications. Hypercorn can utilise asyncio, uvloop, or trio worker types.

hypercorn 1:app --keyfile ./127.0.0.1-key.pem --certfile ./127.0.0.1.pem
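The same launch can also be done from Python instead of the CLI. A sketch, assuming the app lives in a module named `main` (the article's own file name is unknown, so adjust the import):

```python
# sketch: serve the FastAPI app over HTTPS (enabling HTTP/2) with Hypercorn
import asyncio

from hypercorn.asyncio import serve
from hypercorn.config import Config

from main import app  # hypothetical module name; adjust to your file

config = Config()
config.bind = ["0.0.0.0:8000"]
config.certfile = "./127.0.0.1.pem"
config.keyfile = "./127.0.0.1-key.pem"

if __name__ == "__main__":
    asyncio.run(serve(app, config))
```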

After starting it, you'll find the certificate is not trusted.

C:\Users\Lian>D:\Downloads\mkcert-v1.4.4-windows-amd64.exe -install

The local CA is now installed in the system trust store! ⚡️

Once the local CA is added to the trust store, everything works normally.

img

{
    console.time('total');
    const urls = Array.from({ length: 12 }, (_, i) => `https://127.0.0.1:8000/test/${i}`
    ),
    tasks = urls.map(url => fetch(url).then(r => r.json()).then(data => data));
    await Promise.allSettled(tasks);
    console.timeEnd('total');
}
VM68:7 total: 1069.323974609375 ms

{
    console.time('total');
    const urls = Array.from({ length: 24 }, (_, i) => `https://127.0.0.1:8000/test/${i}`
    ),
    tasks = urls.map(url => fetch(url).then(r => r.json()).then(data => data));
    await Promise.allSettled(tasks);
    console.timeEnd('total');
}
VM74:7 total: 1113.983154296875 ms

As you can see, the browser's 6-request concurrency limit has been lifted.
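The flat ~1.1 s for both 12 and 24 requests is exactly what multiplexing predicts: all streams share one connection and the server sleeps for them concurrently, so wall time stays near a single request's latency regardless of count. A scaled-down asyncio model (illustrative, not a browser):

```python
import asyncio
import time

DELAY = 0.05  # stands in for the server's 1-second sleep

def multiplexed(n_streams: int) -> float:
    """All streams run concurrently, as over one HTTP/2 connection."""
    async def main() -> float:
        start = time.perf_counter()
        await asyncio.gather(*(asyncio.sleep(DELAY) for _ in range(n_streams)))
        return time.perf_counter() - start
    return asyncio.run(main())

print(f"{multiplexed(12):.2f}s vs {multiplexed(24):.2f}s")  # both close to DELAY
```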

4.2 Limiting concurrent fetch requests in JavaScript

A plain-JavaScript implementation of fetch with a concurrency limit, plus request timeout and retry.

It is implemented recursively.

{
    class concurrentRequests {
        #urls = null;
        #responses = null;
        #limit = null;
        #retrys_limit = null;
        #request_count = null;
        #timeout = null;
        #configs = null;
        /**
         *
         * @param {string[]} urls
         * @param {number} limit
         * @param {number} retry_limit
         * @param {number} timeout
         * @param {object | null} configs
         */
        constructor(urls, limit = 4, retry_limit = 0, timeout = 3000, configs = null) {
            this.#urls = urls;
            this.#limit = limit;
            this.#retrys_limit = urls.reduce((obj, key) => { obj[key] = retry_limit; return obj; }, {});
            this.#request_count = 0;
            this.#timeout = timeout;
            this.#configs = configs || {};
            this.#responses = {};
        }
        handle_fetch() {
            return new Promise((resolve, _reject) => {
                const send_request = (url) => {
                    this.#request_count++;
                    const controller = new AbortController();
                    let id = setTimeout(() => {
                        id = null;
                        controller.abort();
                    }, this.#timeout);
                    fetch(url, {
                        ...this.#configs,
                        signal: controller.signal
                    })
                        .then(res => res.json())
                        .then(data => (this.#responses[url] = data))
                        .catch(e => console.error(e))
                        .finally(() => {
                            id && clearTimeout(id);
                            let nu = null;
                            if (!this.#responses[url]) {
                                if (this.#retrys_limit[url] > 0) {
                                    this.#retrys_limit[url]--;
                                    nu = url;
                                } else this.#responses[url] = 'failed to get response';
                            }
                            const cu = nu || this.#urls.pop();
                            // As each request completes, keep going until every task is done.
                            // The recursion keeps the pool running at full capacity, instead of
                            // batching requests and waiting on a whole batch before starting the next.
                            cu ? send_request(cu) : resolve(this.#responses);
                        });
                };
                // The key part: this caps the number of concurrent requests. Only the initial
                // requests start here; the rest are picked up by the recursion, so every slot
                // stays continuously busy.
                while (this.#request_count < this.#limit) {
                    const url = this.#urls.pop();
                    if (!url) break;
                    send_request(url);
                };
            });
        }

    }

    console.time();
    const urls = Array.from({ length: 4 }, (_, i) =>
        `https://127.0.0.1:8000/test/${i}`
    );
    const cr = new concurrentRequests(urls, 2);
    const r = await cr.handle_fetch();
    console.log(r);
    console.timeEnd();
}

The basic logic: first issue the limited number of concurrent requests; then each request, as it finishes, recursively picks up the next one, so tasks run back-to-back with no gaps.
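For comparison, the same keep-the-pool-full pattern is natural in Python with asyncio: spawn `limit` workers that each pop the next URL as soon as their previous one finishes, with per-request timeout and retry. A sketch using a stand-in coroutine in place of a real HTTP client:

```python
import asyncio

async def fetch(url: str) -> str:
    # stand-in for a real HTTP call; swap in aiohttp/httpx in practice
    await asyncio.sleep(0.01)
    return f"ok:{url}"

async def pooled_fetch(urls, limit=4, retries=1, timeout=3.0):
    queue = list(urls)
    results = {}

    async def worker():
        # each worker pops the next URL as soon as its previous one finishes,
        # so the pool stays full instead of draining in batches
        while queue:
            url = queue.pop()
            for _ in range(retries + 1):
                try:
                    results[url] = await asyncio.wait_for(fetch(url), timeout)
                    break
                except (asyncio.TimeoutError, OSError):
                    results[url] = None  # retried; left as None if all attempts fail

    await asyncio.gather(*(worker() for _ in range(limit)))
    return results

results = asyncio.run(pooled_fetch([f"/test/{i}" for i in range(8)], limit=2))
print(results)
```

Because asyncio is single-threaded and there is no `await` between the `while queue` check and the `pop()`, workers never race on an empty queue.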


Surprisingly, although this need must be very common, the solutions AI tools offered were all quite poor.

img

(Tongyi suggested this batch-style approach, which is not what was wanted.)