Scenario: using CPUs 0-23 of NUMA node 0, with cc = 0.2m and keepalive = 1ms, in flood mode, on a single Mellanox ConnectX-5 Ex 100G NIC:
Single NIC, single port: 135 Mpps
Single NIC, dual port: also 135 Mpps
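For context, a minimal sketch of the client config this setup likely corresponds to; the parameter names cc, keepalive, flood, and tx_burst suggest dperf, and the duration, PCI address, and IP addresses below are placeholders rather than values from the report:

```
mode        client
protocol    udp
cpu         0-23                # 24 cores on NUMA node 0
duration    60s                 # placeholder
cc          0.2m                # 200k concurrent connections
keepalive   1ms                 # one packet per connection per interval
flood
port        0000:81:00.0  10.0.0.2  10.0.0.1    # placeholder PCI address / local IP / gateway
client      10.0.1.1      200                   # placeholder client address range
server      10.0.2.1      1                     # placeholder server address
listen      80            1
```

For the dual-port run, a second port line with its own addresses would be added.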
Analysis: since neither cc nor keepalive changed, the pps did not change (the implicit model being that offered pps scales with cc / keepalive). So cc was doubled to 0.4m with keepalive left unchanged, and because cc doubled, the CPU count was doubled as well by bringing the other half of NUMA node 0's cores, CPUs 48-71, into use.
Re-tested with cpu = 0-23 48-71, cc = 0.4m, keepalive = 1ms, and all other parameters unchanged:
Single NIC, dual port: the sender reports 126 Mpps, but the receiver sees only 71 Mpps, with udp drop at 60 Mpps and udpTx at only 11 Mpps.
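Relative to the config sketch above (under the same dperf assumption), only two lines change in this second run:

```
cpu         0-23 48-71      # both halves of NUMA node 0's cores
cc          0.4m            # doubled concurrent connections
```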
Question:
It may have hit the NIC's limit. Measuring maximum PPS requires patiently tuning various parameters; try keepalive 10us and tx_burst 64.
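Expressed against the config sketch above (still assuming dperf syntax), that suggestion would be:

```
keepalive   10us            # shorter interval: more packets per connection
tx_burst    64              # batch size per TX call
```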
Could the switch be dropping the packets? You could connect the two ports directly to rule that out.