
why grpc-java uds is slower than dns? #11763

Open
zhangchao1171 opened this issue Dec 19, 2024 Discussed in #11761 · 4 comments

Comments

@zhangchao1171

Discussed in #11761

Originally posted by zhangchao1171 December 18, 2024
Question:
We ran a test comparing the performance of UDS and DNS. According to what is commonly said online, UDS should be faster because it bypasses much of the network protocol stack. However, our actual tests found that UDS is at best on par with DNS, and in most cases performs worse. Can anyone help explain this? The test cases and results follow.

Environment:
- JDK: 17.0.10
- gRPC: 1.61.1
- Test machine: CentOS 7.9, 64 CPU cores, 251 GB RAM

Code details

UDS server core code:

    String UDS_FILE = "/tmp/grpc_uds.socket";
    server = NettyServerBuilder.forAddress(new DomainSocketAddress(UDS_FILE))
            .bossEventLoopGroup(new EpollEventLoopGroup())
            .workerEventLoopGroup(new EpollEventLoopGroup())
            .channelType(EpollServerDomainSocketChannel.class)
            .maxInboundMessageSize(4 * 1024 * 1024)
            .addService(BrSpringApplication.getBean(GreeterServiceImpl.class))
            .build().start();
    blockUntilShutdown();

UDS client core code:

    String socketPath = "/tmp/grpc_uds.socket";
    channel = NettyChannelBuilder.forAddress(new DomainSocketAddress(socketPath))
            .eventLoopGroup(new EpollEventLoopGroup())
            .channelType(EpollDomainSocketChannel.class)
            .maxInboundMessageSize(4 * 1024 * 1024)
            .usePlaintext()
            .build();
    GreeterGrpc.GreeterBlockingStub blockStub = GreeterGrpc.newBlockingStub(channel);
    HelloRequest request = HelloRequest.newBuilder().setName(paramJson.toJSONString()).build();
    HelloReply response = blockStub.sayHello(request);
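A side note on the setup above (an observation about the configuration, not a confirmed cause of the slowdown): a no-argument `new EpollEventLoopGroup()` defaults to 2× the available processors worth of event-loop threads, so on this 64-core machine the boss group, worker group, and client group would each get 128 threads. A minimal JDK-only sketch of that default (the class name here is invented for illustration):

```java
public class NettyDefaultThreads {
    public static void main(String[] args) {
        // Netty's MultithreadEventLoopGroup defaults to 2 * availableProcessors
        // event-loop threads when constructed with no arguments (unless the
        // io.netty.eventLoopThreads system property overrides it).
        int cores = Runtime.getRuntime().availableProcessors();
        int nettyDefault = cores * 2;
        System.out.println("cores=" + cores + ", default event-loop threads=" + nettyDefault);
    }
}
```

Passing an explicit small thread count, e.g. `new EpollEventLoopGroup(4)`, and sharing one group between the boss and worker roles is a common way to rule out thread oversubscription in a benchmark like this.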

GreeterServiceImpl.java

    @Override
    public void sayHello(HelloRequest request, StreamObserver<HelloReply> responseObserver) {
        long startTime = System.currentTimeMillis();
        logger.warn("request:{}", request.getName());
        try {
            if (Context.current().isCancelled()) {
                logger.warn("deadline cancelled, deadline:{}", Context.current().getDeadline());
                responseObserver.onError(new Exception("Cancelled by client"));
                return;
            }
            JSONObject msgRequest = JSONObject.parseObject(request.getName());
            int num = msgRequest.getIntValue("factor");
            String msg = msgRequest.getString("message");
            StringBuilder messageBack = new StringBuilder("rep-");
            for (int i = 0; i < num; i++) {
                messageBack.append(msg).append("|");
            }
            HelloReply response = HelloReply.newBuilder()
                    .setMessage(messageBack + ",ms:" + (System.currentTimeMillis() - startTime))
                    .build();
            responseObserver.onNext(response);
            responseObserver.onCompleted();
        } catch (BrException e) {
            logger.error("BrException, code:{}, message:{}", e.getCode(), e.getMessage());
            responseObserver.onNext(HelloReply.newBuilder().setMessage(e.getMessage()).build());
            responseObserver.onCompleted();
        } catch (Exception ex) {
            logger.error("sys error", ex);
            responseObserver.onNext(HelloReply.newBuilder().setMessage(ex.getMessage()).build());
            responseObserver.onCompleted();
        }
    }

DNS server core code:

    int port = 80; // forPort(...) takes an int, not a String
    server = NettyServerBuilder.forPort(port)
            .maxInboundMessageSize(4 * 1024 * 1024)
            .addService(BrSpringApplication.getBean(GreeterServiceImpl.class))
            .build().start();
    blockUntilShutdown();

DNS client core code:

    String target = "localhost:80";
    channel = ManagedChannelBuilder.forTarget(target).usePlaintext().build();
    GreeterGrpc.GreeterBlockingStub blockStub = GreeterGrpc.newBlockingStub(channel);
    HelloRequest request = HelloRequest.newBuilder().setName(paramJson.toJSONString()).build();
    HelloReply response = blockStub.sayHello(request);
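Worth noting: in the "dns" case the name lookup happens only when the channel establishes a connection; every subsequent RPC rides an ordinary TCP connection over the loopback interface. So the comparison is really UDS vs TCP loopback, not UDS vs DNS resolution. A tiny JDK-only illustration (the class name is made up):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class LoopbackCheck {
    public static void main(String[] args) throws UnknownHostException {
        // "localhost" resolves to a loopback address; gRPC performs this lookup
        // at connect time, not per RPC, so steady-state traffic is plain TCP
        // over loopback.
        InetAddress addr = InetAddress.getByName("localhost");
        System.out.println(addr.getHostAddress() + " loopback=" + addr.isLoopbackAddress());
    }
}
```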

Test results:

| client msg size | response msg size | threads | total requests | type | run | total cost (ms) | QPS |
|---|---|---|---|---|---|---|---|
| 12B | 60B | 50 | 5000000 | dns | 1 | 127359 | 5535.790199357721 |
| 12B | 60B | 50 | 5000000 | dns | 2 | 126051 | 5593.233722858208 |
| 12B | 60B | 50 | 5000000 | dns | 3 | 128828 | 5472.666687366101 |
| 12B | 60B | 50 | 5000000 | uds | 1 | 133212 | 5292.561510974987 |
| 12B | 60B | 50 | 5000000 | uds | 2 | 128315 | 5494.546265050852 |
| 12B | 60B | 50 | 5000000 | uds | 3 | 127177 | 5543.712337922738 |
| 12B | 120B | 50 | 5000000 | dns | 1 | 125919 | 5599.097070338868 |
| 12B | 120B | 50 | 5000000 | dns | 2 | 122034 | 5777.346509989019 |
| 12B | 120B | 50 | 5000000 | dns | 3 | 120439 | 5853.857172510566 |
| 12B | 120B | 50 | 5000000 | uds | 1 | 123538 | 5707.010830675582 |
| 12B | 120B | 50 | 5000000 | uds | 2 | 126988 | 5551.963209122122 |
| 12B | 120B | 50 | 5000000 | uds | 3 | 126450 | 5575.584847765916 |
| 12B | 6kb | 50 | 5000000 | dns | 1 | 123176 | 5723.783074624926 |
| 12B | 6kb | 50 | 5000000 | dns | 2 | 127242 | 5540.880401125414 |
| 12B | 6kb | 50 | 5000000 | dns | 3 | 121556 | 5800.06502352825 |
| 12B | 6kb | 50 | 5000000 | uds | 1 | 131939 | 5343.626251525327 |
| 12B | 6kb | 50 | 5000000 | uds | 2 | 133399 | 5285.1423473939085 |
| 12B | 6kb | 50 | 5000000 | uds | 3 | 130091 | 5419.534817935138 |
| 5kb | 5kb | 50 | 5000000 | dns | 1 | 133404 | 5284.9442595424425 |
| 5kb | 5kb | 50 | 5000000 | dns | 2 | 130532 | 5401.225017620201 |
| 5kb | 5kb | 50 | 5000000 | dns | 3 | 130145 | 5417.286134695916 |
| 5kb | 5kb | 50 | 5000000 | uds | 1 | 140809 | 5007.0144948121215 |
| 5kb | 5kb | 50 | 5000000 | uds | 2 | 140685 | 5011.427685965099 |
| 5kb | 5kb | 50 | 5000000 | uds | 3 | 140992 | 5000.515660463005 |

I expected gRPC over UDS to be faster than DNS, but it is not. All test cases run on a single machine (localhost). I placed the UDS socket path on an SSD, but its performance was the same as on an HDD. I also increased the server and client memory from 2 GB to 6 GB, with no improvement.

@zhangchao1171
Author

Tested on the latest gRPC version, 1.69.0, with Netty 4.1.110.Final; the result is the same as with 1.61.1: gRPC over UDS is still slower than DNS. Moreover, comparing the results, it seems UDS on 1.69.0 is slower than on 1.61.1. Here are the test results on gRPC 1.69.0:

| client msg size | response msg size | threads | total requests | type | run | total cost (ms) | QPS |
|---|---|---|---|---|---|---|---|
| 5kb | 5kb | 50 | 5000000 | dns | 1 | 133404 | 5284.9442595424425 |
| 5kb | 5kb | 50 | 5000000 | dns | 2 | 133904 | 5265.210180427769 |
| 5kb | 5kb | 50 | 5000000 | dns | 3 | 129208 | 5456.571605473346 |
| 5kb | 5kb | 50 | 5000000 | uds | 1 | 146765 | 4803.820420399959 |
| 5kb | 5kb | 50 | 5000000 | uds | 2 | 143096 | 4926.990999049589 |
| 5kb | 5kb | 50 | 5000000 | uds | 3 | 144900 | 4865.650131124914 |

@kannanjgithub
Contributor

About DNS vs UDS speed: before attributing this to gRPC, we need comparison results from connecting directly to DNS-resolved addresses vs UDS socket addresses, showing that UDS is faster than DNS.
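One way to get such a baseline without Netty or gRPC is the JDK's own channel API, which supports Unix domain sockets since JDK 16. Below is a rough single-connection ping-pong sketch (all class and method names are invented for this example; it measures raw round-trip cost for TCP loopback vs UDS, not gRPC behavior):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.StandardProtocolFamily;
import java.net.UnixDomainSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Files;
import java.nio.file.Path;

public class RawPingPong {
    // Runs `rounds` one-byte request/response exchanges and returns elapsed nanos.
    static long pingPong(ServerSocketChannel server, SocketChannel client, int rounds) throws Exception {
        Thread echo = new Thread(() -> {
            try (SocketChannel peer = server.accept()) {
                ByteBuffer b = ByteBuffer.allocate(1);
                for (int i = 0; i < rounds; i++) {
                    b.clear(); while (b.hasRemaining()) peer.read(b);  // read request byte
                    b.flip();  while (b.hasRemaining()) peer.write(b); // echo it back
                }
            } catch (IOException e) { throw new RuntimeException(e); }
        });
        echo.start();
        ByteBuffer b = ByteBuffer.allocate(1);
        long t0 = System.nanoTime();
        for (int i = 0; i < rounds; i++) {
            b.clear(); b.put((byte) 1).flip();
            while (b.hasRemaining()) client.write(b); // send request byte
            b.clear(); while (b.hasRemaining()) client.read(b); // wait for echo
        }
        long elapsed = System.nanoTime() - t0;
        echo.join();
        return elapsed;
    }

    public static void main(String[] args) throws Exception {
        int rounds = args.length > 0 ? Integer.parseInt(args[0]) : 10_000;

        // TCP over loopback: what the "dns" case actually exercises at runtime.
        try (ServerSocketChannel tcpServer = ServerSocketChannel.open()) {
            tcpServer.bind(new InetSocketAddress("127.0.0.1", 0));
            try (SocketChannel tcpClient = SocketChannel.open(tcpServer.getLocalAddress())) {
                System.out.printf("tcp loopback: %d us%n", pingPong(tcpServer, tcpClient, rounds) / 1_000);
            }
        }

        // Unix domain socket (JDK 16+).
        Path path = Files.createTempDirectory("uds").resolve("bench.sock");
        UnixDomainSocketAddress addr = UnixDomainSocketAddress.of(path);
        try (ServerSocketChannel udsServer = ServerSocketChannel.open(StandardProtocolFamily.UNIX)) {
            udsServer.bind(addr);
            try (SocketChannel udsClient = SocketChannel.open(addr)) {
                System.out.printf("unix socket : %d us%n", pingPong(udsServer, udsClient, rounds) / 1_000);
            }
        } finally {
            Files.deleteIfExists(path);
        }
    }
}
```

This strips away HTTP/2, protobuf, and event-loop scheduling entirely, so any UDS-vs-TCP gap it shows is purely transport cost.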

@zhangchao1171
Author

zhangchao1171 commented Dec 19, 2024

Maybe I will do a test using native libraries rather than gRPC, but I've observed that the new version is also slower than the old version.

@zhangchao1171
Author

> About DNS vs UDS speed: before attributing this to gRPC, we need comparison results from connecting directly to DNS-resolved addresses vs UDS socket addresses, showing that UDS is faster than DNS.

I did a test with Netty comparing UDS vs DNS, and the result shows UDS is faster than DNS, especially at small message sizes. As the message size grows, the advantage shrinks. These results are in line with our expectations. So why do the gRPC results not match the Netty results, and why is gRPC over UDS slower than DNS? Does anybody have an idea?

Here are the Netty results:

| client msg size | response msg size | threads | total requests | type | run | total cost (ms) | QPS |
|---|---|---|---|---|---|---|---|
| 12B | 120b | 50 | 5000000 | dns | 1 | 57310 | 12302.088710521724 |
| 12B | 120b | 50 | 5000000 | dns | 2 | 51310 | 13740.649074254532 |
| 12B | 120b | 50 | 5000000 | dns | 3 | 52102 | 13531.778127519097 |
| 12B | 120b | 50 | 5000000 | uds | 1 | 36824 | 19146.01086248099 |
| 12B | 120b | 50 | 5000000 | uds | 2 | 38930 | 18110.267248908298 |
| 12B | 120b | 50 | 5000000 | uds | 3 | 39312 | 17934.28734228734 |
| 12B | 2.3kb | 50 | 5000000 | dns | 1 | 131763 | 5350.763901854087 |
| 12B | 2.3kb | 50 | 5000000 | dns | 2 | 136208 | 5176.147539057912 |
| 12B | 2.3kb | 50 | 5000000 | dns | 3 | 128774 | 5474.961591625639 |
| 12B | 2.3kb | 50 | 5000000 | uds | 1 | 122744 | 5743.928045362706 |
| 12B | 2.3kb | 50 | 5000000 | uds | 2 | 124167 | 5678.10049368995 |
| 12B | 2.3kb | 50 | 5000000 | uds | 3 | 129416 | 5447.80169376275 |
| 12B | 6kb | 50 | 5000000 | dns | 1 | 283017 | 2491.1319956045045 |
| 12B | 6kb | 50 | 5000000 | dns | 2 | 279303 | 2524.2575410933646 |
| 12B | 6kb | 50 | 5000000 | dns | 3 | 286790 | 2458.3587433313573 |
| 12B | 6kb | 50 | 5000000 | uds | 1 | 269969 | 2611.5320796091405 |
| 12B | 6kb | 50 | 5000000 | uds | 2 | 272928 | 2583.21866572869 |
| 12B | 6kb | 50 | 5000000 | uds | 3 | 266057 | 2649.931044851291 |
| 5kb | 5kb | 50 | 5000000 | dns | 1 | 297589 | 2369.1490747305847 |
| 5kb | 5kb | 50 | 5000000 | dns | 2 | 265513 | 2655.3603928997827 |
| 5kb | 5kb | 50 | 5000000 | dns | 3 | 293901 | 2398.878207287488 |
| 5kb | 5kb | 50 | 5000000 | uds | 1 | 271206 | 2599.620598364343 |
| 5kb | 5kb | 50 | 5000000 | uds | 2 | 266808 | 2642.4721297712213 |
| 5kb | 5kb | 50 | 5000000 | uds | 3 | 263980 | 2670.780756117888 |
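One way to read these numbers together (a back-of-envelope estimate, treating the reported QPS column as a consistent relative measure and using the small-message rows from the tables above): raw Netty shows UDS saving roughly 30 µs of aggregate time per request over TCP loopback, while a whole gRPC request costs about 180 µs. So even a perfectly preserved transport saving would buy at most ~16%, and it can easily be swallowed by HTTP/2 framing, flow control, and protobuf work. A sketch of that arithmetic (the class is invented; values are copied from the tables):

```java
public class Headroom {
    public static void main(String[] args) {
        // QPS figures taken from the 12B-request rows of the tables in this thread.
        double nettyTcpQps = 12302.1; // raw Netty, TCP loopback
        double nettyUdsQps = 19146.0; // raw Netty, UDS
        double grpcQps     = 5535.8;  // gRPC, TCP loopback

        // Aggregate seconds per request for each setup.
        double tcpPerReq  = 1.0 / nettyTcpQps;
        double udsPerReq  = 1.0 / nettyUdsQps;
        double grpcPerReq = 1.0 / grpcQps;

        double transportSaving = tcpPerReq - udsPerReq;        // about 29 microseconds
        double bestCaseSpeedup = transportSaving / grpcPerReq; // about 0.16

        System.out.printf("transport saving: %.1f us per request%n", transportSaving * 1e6);
        System.out.printf("best-case gRPC improvement: %.1f%%%n", bestCaseSpeedup * 100);
    }
}
```

If the gRPC-internal cost per request varies between runs by more than this headroom (which the run-to-run spread in the tables suggests), the UDS advantage would be lost in the noise, which is consistent with what the gRPC benchmarks show.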
