irc:1479855600
[09:43:49] *** ChanServ sets mode: +o purplefox

[11:58:12] <ppatiern> Hi guys, I'm developing some unit tests for a Vert.x based application and in these tests I need to use an external library (which in my case is the Eclipse Paho client for MQTT) which has some methods with callbacks that are executed in their own threads. What's the best practice in this case? When the callback is called, should the runOnContext method be used to have the code executed on the event loop?

[13:08:35] <gihad_> I was just checking the mailing list for framework-benchmarks, my own vertx benchmarking code also spits out the same messages when under a lot of stress: "SEVERE: java.io.IOException: Connection reset by peer", it seems that they disqualified vertx from the list due to that

[13:10:30] <gihad_> I benchmarked my app using the wrk tool, with -t16 -n400 parameters, my code is here: https://github.com/gihad/vertx_leaderboard

[13:13:04] <gihad_> Although the error seems to be pointing at the client, some of my other equivalent implementations didn't spit out any errors under the same conditions

[13:23:21] <gihad_> Aside from that, can anyone check my deployment options in Application.java and the way I'm using the 2 verticles in https://github.com/gihad/vertx_leaderboard ? I'm getting good performance out of this code but I'm not sure I'm using Vert.x right. For instance, I need to set instances to 1000 (LeaderboardVerticle) to get really good performance.

[13:39:01] <gihad_> Another pattern I noticed is that every subsequent benchmark gets slower: http://pasted.co/3cbd0444/fullscreen.php?hash=5db145c26884c0e178120db065e676a0&toolbar=true&linenum=false

[13:40:07] <gihad_> (each run causes several "SEVERE: java.io.IOException: Connection reset by peer" messages in the vert.x server)

[15:33:33] *** ChanServ sets mode: +o purplefox

[16:24:02] <temporalfox> gihad this is expected because the client (wrk) closes the connection abruptly

[16:24:22] <temporalfox> they haven't replied to the thread

[16:24:30] <temporalfox> and we still don't know what happened

[16:25:56] <temporalfox> ppatiern runOnContext is used to get back to the Context threading model

[16:26:04] <temporalfox> if that's an event loop it will be that same thread
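
To illustrate the runOnContext pattern discussed above, here is a minimal sketch. The scheduled executor stands in for an external library (such as the Paho MQTT client) that fires callbacks on its own thread, and the event-bus address is made up:

  import io.vertx.core.AbstractVerticle;
  import io.vertx.core.Context;

  import java.util.concurrent.Executors;
  import java.util.concurrent.ScheduledExecutorService;
  import java.util.concurrent.TimeUnit;

  // Sketch: hop from an external library's callback thread back onto the
  // verticle's event loop with runOnContext before touching Vert.x state.
  public class ExternalCallbackVerticle extends AbstractVerticle {

    // Stand-in for the external library's own thread (e.g. a Paho callback thread).
    private final ScheduledExecutorService externalThread =
        Executors.newSingleThreadScheduledExecutor();

    @Override
    public void start() {
      // Capture this verticle's context while still on the event loop.
      Context ctx = vertx.getOrCreateContext();

      // Simulated library callback, executed on the executor's thread.
      externalThread.schedule(() ->
          // Re-enter the event loop before using Vert.x APIs.
          ctx.runOnContext(v ->
              vertx.eventBus().publish("mqtt.messages", "hello from callback")),
          1, TimeUnit.SECONDS);
    }

    @Override
    public void stop() {
      externalThread.shutdownNow();
    }
  }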

[16:27:21] <gihad> temporalfox: This doesn't happen with other stacks (e.g. blocking code with Java Threads running in Tomcat) under the same amount of stress. Is Vert.x just more verbose with the messages?

[16:28:01] <temporalfox> it happens because the HttpServer does not set an exception handler on the HttpServerRequest or HttpServerResponse

[16:28:11] <temporalfox> try modifying the benchmark

[16:28:17] <temporalfox> and set an empty exception handler on both
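
A minimal sketch of the suggested fix, assuming a plain HttpServer (port and response body are arbitrary): register exception handlers on both the request and the response so an abrupt client disconnect is swallowed instead of surfacing as an unhandled exception.

  import io.vertx.core.AbstractVerticle;

  // Sketch: empty exception handlers on HttpServerRequest and HttpServerResponse
  // so "Connection reset by peer" from an abruptly closing client (e.g. wrk)
  // is not reported as a SEVERE error.
  public class QuietHttpServerVerticle extends AbstractVerticle {

    @Override
    public void start() {
      vertx.createHttpServer()
          .requestHandler(req -> {
            req.exceptionHandler(t -> { /* client went away: ignore */ });
            req.response().exceptionHandler(t -> { /* ignore */ });
            req.response().end("ok");
          })
          .listen(8080);
    }
  }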

[16:30:08] <ppatiern> temporalfox: thanks! it's exactly what I thought to do :-)

[16:31:20] <gihad> temporalfox: There is also a degradation of performance every time I run the benchmark and it spits out those messages, the first run is always faster. That could just be my implementation tho

[16:32:05] <temporalfox> we can figure it out

[16:32:17] <temporalfox> it depends on a lot of factors

[16:32:23] <temporalfox> how long do you run it?

[16:32:29] <gihad> 10 seconds

[16:32:42] <gihad> The code is here: https://github.com/gihad/vertx_leaderboard

[16:33:01] <gihad> Even tho I get good performance, I'm pretty sure my usage of the verticles is wrong

[16:33:19] <temporalfox> why don't you reuse the code of techempower?

[16:33:48] <gihad> I'm testing vertx and a few other concurrency models for usage where I work

[16:34:07] <temporalfox> can you try this: https://github.com/vert-x3/vertx-perf

[16:34:17] <temporalfox> it has optimizations

[16:36:16] <gihad> Yeah, I can run it in a few hours

[16:36:32] <gihad> If you could check the way I'm using the verticles in my code it would be helpful too

[16:36:46] <gihad> I'm worried because I need to set instances to 1000 for the LeaderboardVerticle

[16:36:55] <temporalfox> I sincerely don't have time

[16:37:08] <temporalfox> going to paris tomorrow

[16:37:10] <temporalfox> many things to check

[16:39:57] <temporalfox> 1000 instances?

[16:39:57] <temporalfox> why?

[16:40:14] <temporalfox> usually you need 2 instances per core
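
A sketch of what "instances per core" looks like in code; LeaderboardVerticle is the class name mentioned above and its fully qualified name is assumed here, not checked against the repository:

  import io.vertx.core.DeploymentOptions;
  import io.vertx.core.Vertx;

  // Sketch: deploy an event-loop verticle with a core-based instance count
  // (roughly 2 per core) instead of hundreds of instances.
  public class Deployer {
    public static void main(String[] args) {
      Vertx vertx = Vertx.vertx();
      int instances = 2 * Runtime.getRuntime().availableProcessors();
      vertx.deployVerticle("com.example.LeaderboardVerticle",   // assumed FQCN
          new DeploymentOptions().setInstances(instances));
    }
  }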

[17:09:06] <myghty> hm I remember seeing some documentation about repeatedly performing an action every few seconds

[17:09:16] <myghty> Now I can't find it... anyone know where it is?

[17:10:03] <myghty> ah lol...

[17:10:08] <myghty> vert.x core docu ;)
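
For reference, this is the vertx-core timer API myghty found in the docs; the interval and printed message are just placeholders:

  import io.vertx.core.Vertx;

  // Sketch: run an action every few seconds with setPeriodic; the handler is
  // invoked on an event loop, and the returned id can be used to cancel the timer.
  public class PeriodicExample {
    public static void main(String[] args) {
      Vertx vertx = Vertx.vertx();
      long timerId = vertx.setPeriodic(5000, id ->
          System.out.println("runs every 5 seconds"));
      // Later: vertx.cancelTimer(timerId);
    }
  }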

[17:51:47] <temporalfox> myghty ?

[17:51:54] <temporalfox> ppatiern cool

[17:54:45] <ppatiern> temporalfox: I have an idea for the MQTT server API: change some method names ... for example, I don't like the current writeConnack for sending CONNACK to the remote MQTT client passing the result code. I'd like to split it into something like accept() and reject(code).

[17:55:27] <temporalfox> can you send an email to vertx-dev about this?

[17:55:38] <ppatiern> temporalfox: ah ok!

[18:06:12] <gihad> temporalfox: I have 2 verticles, one that has the redis async code and another that has the httpServer and the router which sends messages on the event bus to the redis async verticle

[18:06:38] <gihad> I deploy 1000 instances of the one that has the Http Server + Router to get max performance
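
A rough sketch of the layout gihad describes: an HTTP + Router front-end verticle forwarding work over the event bus to a separate verticle that owns the Redis client (not shown). The event-bus address, route, and payload shape are assumptions, not taken from the linked repository; the code is written against the Vert.x 3.x API being discussed in the channel.

  import io.vertx.core.AbstractVerticle;
  import io.vertx.ext.web.Router;

  // Sketch: front-end verticle owning the HttpServer and Router, sending each
  // request over the event bus to a Redis-backed verticle deployed separately.
  public class HttpFrontendVerticle extends AbstractVerticle {

    @Override
    public void start() {
      Router router = Router.router(vertx);

      router.get("/rank/get").handler(ctx ->
          // "leaderboard.rank" is an assumed address consumed by the Redis verticle.
          vertx.eventBus().<String>send("leaderboard.rank", ctx.request().query(), reply -> {
            if (reply.succeeded()) {
              ctx.response().end(reply.result().body());
            } else {
              ctx.response().setStatusCode(500).end();
            }
          }));

      vertx.createHttpServer().requestHandler(router::accept).listen(8080);
    }
  }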

[18:07:00] <temporalfox> try with 1

[18:07:03] <temporalfox> instead

[18:07:26] <temporalfox> 1 event loop can handle a crazy load of messages

[18:08:04] <gihad> Yeah, I tried that and it handles a lot less QPS. My Worker Verticle with redis async is a singleton I believe, so there is only one of that

[18:08:44] <gihad> Would that be a problem?

[18:12:29] <gihad> Is the event loop defined by the verticle that has the http server?

[18:15:40] <gihad> Once I figure this out I was hoping to use this as a more complete example of using spring boot with Vert.x, the project in the Vert.x repo doesn't show much

[18:16:16] <myghty> temporalfox: I was looking for the timer

[18:21:59] <temporalfox> gihad try instance num == number of cores of your machine

[18:22:20] <temporalfox> do you have a .sh script with wrk, gihad?

[18:22:25] <temporalfox> so I can try it quickly

[18:22:57] <gihad> I use this command: wrk -t24 -c6000 -d10s 'http://10.0.0.150:8080/rank/get?bottom=1000&top=1010'

[18:23:10] <gihad> on a C4.x4large Amazon instance

[18:23:36] <temporalfox> ah

[18:23:46] <temporalfox> C4.x4 means?

[18:23:57] <gihad> 16 cores xeon server

[18:24:38] <temporalfox> set 32 instances

[18:24:42] <gihad> On the other side is the vertx app running on another server of the same size, with instances = 1200.

[18:25:06] <gihad> ok, I will tell you the numbers

[18:27:23] <gihad> instances = 32: https://snag.gy/SlkEs3.jpg

[18:28:08] <gihad> instances = 1200: https://snag.gy/SWLwGC.jpg

[18:54:22] <ppatiern> temporalfox: I did it ... but ... should I wait for some comments, or can I start with my changes? :-)

[21:01:09] <myghty> Hi there

[21:01:22] <myghty> I am wondering... if the service discovery is solely a hazelcast map

[21:01:36] <myghty> I should be able to register a nice small spring service there as well, right?

[21:19:25] <myghty> btw the roadmap seems to be outdated... :/

[21:28:42] <myghty> ah found it

[21:28:53] <myghty> basically I just have to write to the sync Map I guess :)
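
If the goal is to register a non-Vert.x (e.g. Spring) HTTP service, publishing a record through the vertx-service-discovery API is usually cleaner than writing to the backing map directly. A sketch, with the record name, host and port made up:

  import io.vertx.core.Vertx;
  import io.vertx.servicediscovery.Record;
  import io.vertx.servicediscovery.ServiceDiscovery;
  import io.vertx.servicediscovery.types.HttpEndpoint;

  // Sketch: publish a record describing an external HTTP service so other
  // components can look it up through the discovery API.
  public class PublishExternalService {
    public static void main(String[] args) {
      Vertx vertx = Vertx.vertx();
      ServiceDiscovery discovery = ServiceDiscovery.create(vertx);

      // Hypothetical Spring service running on localhost:8081.
      Record record = HttpEndpoint.createRecord("spring-service", "localhost", 8081, "/");

      discovery.publish(record, ar -> {
        if (ar.succeeded()) {
          System.out.println("published, registration id: " + ar.result().getRegistration());
        } else {
          ar.cause().printStackTrace();
        }
      });
    }
  }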

[21:39:04] <xkr47> Hi, I'm using `vertx.createHttpServer(...)` and I'm wondering if it's possible to get a handle to the ChannelPipeline object... I would like to add a custom logging handler first in the pipeline

[21:39:26] <myghty> xkr47: you can chain routers as well

[21:40:08] <myghty> xkr47: http://vertx.io/docs/vertx-web/js/#_handling_requests_and_calling_the_next_handler
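
The handler chaining that the linked vertx-web doc describes looks roughly like this; the log message and port are placeholders:

  import io.vertx.core.AbstractVerticle;
  import io.vertx.ext.web.Router;

  // Sketch: an early catch-all handler runs first, then calls next() so the
  // request continues on to the handler that actually produces the response.
  public class ChainedHandlersVerticle extends AbstractVerticle {

    @Override
    public void start() {
      Router router = Router.router(vertx);

      // Runs for every request; logs and hands off to the next matching handler.
      router.route().handler(ctx -> {
        System.out.println("request: " + ctx.request().uri());
        ctx.next();
      });

      // Terminal handler: writes the response.
      router.route().handler(ctx -> ctx.response().end("hello"));

      vertx.createHttpServer().requestHandler(router::accept).listen(8080);
    }
  }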

[21:40:22] <xkr47> ok but I want to track the connections too

[21:40:53] <xkr47> I tried to add a connectionHandler to the HttpServer, but it's only called once the first request headers have been parsed

[21:41:16] <xkr47> so if someone just opens a connection but doesn't send any requests then I get no notification
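
For context, this is roughly what the connectionHandler attempt looks like; as noted above, for plain HTTP/1.x it only fires once the first request starts arriving, so a connection that never sends a request goes unnoticed:

  import io.vertx.core.AbstractVerticle;

  // Sketch: log connections via HttpServer#connectionHandler. For HTTP/1.x
  // this is invoked only when the first request begins, not on TCP accept.
  public class ConnectionLoggingVerticle extends AbstractVerticle {

    @Override
    public void start() {
      vertx.createHttpServer()
          .connectionHandler(conn ->
              System.out.println("connection from " + conn.remoteAddress()))
          .requestHandler(req -> req.response().end("ok"))
          .listen(8080);
    }
  }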

[21:41:36] <myghty> hm

[21:42:37] <myghty> I do not have the vertx code checked out atm... but I am not quite sure this is possible at the moment

[21:42:56] <xkr47> ok

[21:48:07] <xkr47> bah, configureHttp2 is public but configureHttp1 is private :P

[21:48:30] <myghty> :D

[21:48:37] <myghty> open an issue ;)

[21:48:57] <xkr47> easier to do a pull request than explain the thing in English :D

[21:49:18] <xkr47> additionally I can compile the code and use it immediately ;)

[21:58:44] <myghty> true!

[21:59:39] <xkr47> heh I could override getSslHelper() and return a fake implementation that returns MyCustomLoggingHandler instead of SSLHandler :D

[22:00:31] <xkr47> I have a different reverse proxy vertx server in front so I don't need ssl in this instance ^^