irc:1465336800 [2017/05/27 13:44]
[02:37:39] <gastaldi> hey

[02:37:55] <gastaldi> temporalfox, when is the next Vert.x release coming up?

[07:51:51] <temporalfox> gastaldi|away there is a milestone this week and around 22/06 there will be the final

[08:06:46] <gastaldi> temporalfox: excellent. Thank you

[08:08:46] <gastaldi> cescoffier: for some strange reason I can't seem to have a valid dependency in maven for the jar you created in https://github.com/vert-x3/vertx-jca/issues/7

[08:09:06] <gastaldi> Maybe because the packaging in the pom is rar?

[08:14:09] <cescoffier> gastaldi: did you try with <type>jar</type> in your dependency? also with -U to refresh the snapshots?

[08:15:06] <gastaldi> Yup, I did. Maybe I am missing something

[08:15:25] <cescoffier> let me try on a simple pom

[08:16:51] <cescoffier> @gastaldi This works for me:

[08:16:54] <cescoffier>     <dependency>

[08:16:54] <cescoffier>       <groupId>io.vertx</groupId>

[08:16:55] <cescoffier>       <artifactId>vertx-jca-adapter</artifactId>

[08:16:55] <cescoffier>       <type>jar</type>

[08:16:57] <cescoffier>     </dependency>

[08:17:20] <cescoffier> with this, if I launch `mvn dependency:copy-dependencies`, I've got the jar file

[08:17:44] <gastaldi> Right. I'll try again later. Thank you

[08:18:22] <gastaldi> Maybe I was missing the snapshots repository declaration

[08:18:59] <cescoffier> oh, yes it's only available on 3.3.0-SNAPSHOT

[08:19:03] <cescoffier> the release is "on its way"
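The snapshots repository declaration gastaldi suspects he was missing would look something like this in a pom.xml. This is a hedged sketch: the chat never names the repository, so the `id` and the Sonatype OSS snapshots URL here are assumptions, not something stated by the participants.

```xml
<!-- Hypothetical snapshots repository declaration; the id and URL
     are assumed, the chat does not name the actual repository. -->
<repositories>
  <repository>
    <id>sonatype-snapshots</id>
    <url>https://oss.sonatype.org/content/repositories/snapshots</url>
    <snapshots>
      <enabled>true</enabled>
    </snapshots>
  </repository>
</repositories>
```

Without such a declaration (or a mirror providing snapshots), Maven cannot resolve a 3.3.0-SNAPSHOT artifact, which matches the resolution failure described above.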
 +
[08:19:12] <gastaldi> Sweet

[08:19:12] <cescoffier> (but the way is a bit long ;-))

[08:19:19] <gastaldi> :(

[08:19:28] <cescoffier> we are going to create a first milestone tomorrow

[08:19:37] <cescoffier> (yes, tomorrow is Thursday, so tomorrow)

[08:19:47] <cescoffier> then the official release would follow

[08:19:57] <gastaldi> Cool

[08:20:06] <gastaldi> 3.0.0.Alpha1?

[08:20:20] <gastaldi> 3.3.0*

[08:20:48] <cescoffier> 3.3.0-m1

[08:20:54] <gastaldi> Nice

[08:21:00] <cescoffier> I will ping you when it's there

[08:21:10] <gastaldi> Ok I think I can wait until then

[08:21:22] <cescoffier> (we have a fairly decent test suite to execute on 3 OS before being there ;-))

[08:21:45] <gastaldi> Great, I appreciate that :)

[08:23:57] <gastaldi> Ok back to sleep now. Morning will rise in a few hours now ;)

[09:57:10] *** ChanServ sets mode: +o temporalfox

[10:54:19] <mcw> hi there, is this statement correct: "when i have more cpu cores than deployed verticles, each verticle is running in its own core". I'm asking because I tried to separate heavy RedisClient usage into a dedicated verticle to not affect the "main" verticle

[11:09:26] <temporalfox> mcw it can be true but it's not guaranteed

[11:09:43] <tsegismont> mcw, it's the job of the kernel to ensure cpu is best used

[11:09:58] <temporalfox> tsegismont yes but there is the underlying vertx thread model

[11:10:11] <temporalfox> so it's best to have two different threads and dedicate one to a verticle

[11:10:26] <temporalfox> so the CPU will be able to assign a core to a thread

[11:10:38] <temporalfox> I mean the kernel / jvm

[11:11:23] <tsegismont> right, but vertx/jvm can't do anything other than assigning a specific thread

[11:11:35] <tsegismont> that's what I meant

[11:11:53] <tsegismont> anyway, mcw has the answer :)

[11:16:07] <mcw> thank you. so deploying one verticle with heavy cpu usage does affect another verticle?

[11:20:42] <temporalfox> mcw it is hard to tell, however doing a separate deployment is a good idea and offers vertx / jvm more opportunity for optimizing

[11:20:58] <temporalfox> then the JVM is a complicated beast anyway :)

[11:21:21] <temporalfox> one thing you can do is deploy your Redis verticle more than the other verticles

[11:21:27] <temporalfox> or the inverse

[11:21:58] <temporalfox> also I think it depends what you mean by "heavy"

[11:22:01] <temporalfox> I don't think that RedisClient needs much CPU

[11:22:39] <temporalfox> mcw have you done measurements?

[11:24:04] <Sticky> I have had issues in the past with one very busy verticle blocking others on the same thread in the pool

[11:24:24] <Sticky> I solved it by putting that verticle onto a worker

[11:26:10] <mcw> I have the following use case: I have a verticle starting an httpserver for "normal" requests which also go to redis to load data for example. one "feature" is an analysis of the redis data which "loops" down the redis-key-structure

[11:26:52] <mcw> when this analysis task is running, the other requests become very slow

[11:27:12] <Sticky> is that loop blocking?

[11:27:25] <mcw> so the idea was to separate the analysis task into its own verticle

[11:28:03] <mcw> yep it's blocking because of the loop over the keys

[11:28:20] <Sticky> yeah, put that blocking activity onto a worker

[11:29:10] <mcw> putting it onto a worker would result in having this code in a separate verticle and deploying this verticle with "setWorker(true)"?

[11:30:46] <Sticky> yes, there is also vertx.executeBlocking, but I don't know exactly what that does

[11:31:57] <tsegismont> mcw, if your verticle does blocking calls to redis, you must deploy it as a worker verticle

[11:32:29] <tsegismont> mcw, then vert.x will peel a thread from the worker pool to execute the verticle code

[11:33:22] <tsegismont> mcw, and that will definitely not be an event loop thread, so if your http server is a standard verticle, both verticles are guaranteed to run in different threads

[11:33:42] <tsegismont> mcw, and then it's the role of the kernel to ensure cpu resources are best used

[11:34:05] <mcw> ok, thank you. I'll try this
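The two options Sticky and tsegismont mention, deploying the blocking code as a worker verticle via setWorker(true), or offloading a single task with vertx.executeBlocking, can be sketched in Vert.x 3 Java as below. The class and verticle names are made up for illustration; only the DeploymentOptions and executeBlocking APIs come from the discussion.

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class WorkerExample {

  // Hypothetical verticle holding the blocking analysis loop.
  static class AnalysisVerticle extends AbstractVerticle {
    @Override
    public void start() {
      // This runs on a worker thread, so a long blocking loop here
      // does not stall the event loop serving HTTP requests.
      System.out.println("analysis on " + Thread.currentThread().getName());
    }
  }

  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // Option 1: deploy as a worker verticle, as discussed above.
    vertx.deployVerticle(new AnalysisVerticle(),
        new DeploymentOptions().setWorker(true));

    // Option 2: executeBlocking offloads one blocking task from a
    // standard verticle to the worker pool.
    vertx.executeBlocking(future -> {
      // ... blocking work would go here ...
      future.complete("done");
    }, res -> {
      System.out.println("result: " + res.result());
      vertx.close(); // shut down so the JVM can exit
    });
  }
}
```

Either way the blocking work ends up on a `vert.x-worker-thread`, never on an event loop thread, which is exactly the guarantee tsegismont describes.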
 +
[11:46:08] <temporalfox> is redis using execute blocking?

[11:46:16] <temporalfox> I thought it was using plain TCP

[13:27:42] <pmlopes> @temporalfox redis is 100% async, no execute blocking code exists in the codebase

[13:27:58] <temporalfox> yes that's what I thought

[13:27:58] <temporalfox> I remember that

[13:27:59] <pmlopes> and indeed the redis protocol is TCP based

[13:28:37] <temporalfox> so it should not be heavy in processing

[13:29:48] <pmlopes> no, the protocol is also trivial, it is a TLV encoding style: the first byte defines the type, then a number defines the length, and after that the payload

[13:29:49] <temporalfox> mcw you said "when this analysis task is running, the other requests become very slow"

[13:29:58] <temporalfox> do you mean http requests?

[13:30:43] <pmlopes> @mcw did you run your code under a profiler so you can pinpoint the bottleneck?

[13:30:58] <temporalfox> yes I'm surprised that redis would slow down something

[13:31:15] <temporalfox> he said that there is a loop

[13:31:34] <temporalfox> one question is: do we handle backpressure in redis :-) ?

[13:31:58] <temporalfox> I think normally it should not apply

[13:32:16] <pmlopes> in a way yes, all requests are pushed to a queue and then pushed as fast as possible over the socket to the redis server

[13:32:37] <pmlopes> it is kind of a producer/consumer implementation

[13:33:20] <pmlopes> this works fine because redis supports pipelining so we can push request after request even though replies have not been received

[13:36:23] <pmlopes> when @mcw says that it loops over redis key values, does it mean that it uses the keys command to retrieve the list of keys and then for each it does a get? in that case i guess there will be a lot of IO

[13:36:41] <pmlopes> it is the same as the N+1 problem in ORMs

[13:38:43] <pmlopes> not knowing anything about the domain problem I'd say if the IO is the problem then investigate the sorted sets API from redis since it already can perform lots of aggregations, etc...
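The N+1 pattern pmlopes suspects (one KEYS call, then one GET per key) can be collapsed into a single MGET. A rough sketch with the Vert.x 3 RedisClient follows; the host, key pattern, and class name are made up, and the method names (`keys`, `mgetMany`) are my reading of the Vert.x 3 generated Redis API, so treat them as assumptions to verify against the client's javadoc.

```java
import java.util.List;

import io.vertx.core.Vertx;
import io.vertx.redis.RedisClient;
import io.vertx.redis.RedisOptions;

public class MgetSketch {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    RedisClient redis = RedisClient.create(vertx,
        new RedisOptions().setHost("127.0.0.1")); // assumed local server

    // Fetch the matching keys once. (Note: KEYS itself scans the whole
    // keyspace on the server; SCAN is the usual production alternative.)
    redis.keys("user:*", keysRes -> {
      if (keysRes.succeeded()) {
        List<String> keys = keysRes.result().getList();
        // Instead of N separate GETs, read every value in one MGET,
        // avoiding the per-key round trips of the N+1 pattern.
        redis.mgetMany(keys, valuesRes -> {
          if (valuesRes.succeeded()) {
            System.out.println("values: " + valuesRes.result());
            vertx.close();
          }
        });
      }
    });
  }
}
```

Even with the client's pipelining, one bulk command keeps the reply handling to a single callback instead of N, which is where the IO cost pmlopes mentions comes from.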
 +
[13:56:09] <mcw> sorry for the delay (lunch). I could fix the problem by deploying the "analysis" feature as a worker verticle

[14:26:50] <pmlopes> making it a worker would not bring any benefit to the redis client since it is just another thread

[15:17:27] <tsegismont> pmlopes|Zzzz, maybe mcw does not use the vert.x redis client?

[20:44:53] <amr> what's the purpose of AuthHandler addAuthority and addAuthorities?

[20:44:53] <amr> don't see it in the docs

[20:47:06] <amr> oh, the required permission?

[20:48:14] <amr> yea, looks like it

[20:48:30] <amr> weird terminology around that, I've seen it called both authorities and permissions
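As amr concluded, addAuthority/addAuthorities declare the permissions a request's user must hold before the handler lets it through. A minimal Vert.x-Web 3 sketch, where the stub AuthProvider, the authority string, and the route are all made up for illustration:

```java
import io.vertx.core.Future;
import io.vertx.core.Vertx;
import io.vertx.ext.auth.AuthProvider;
import io.vertx.ext.web.Router;
import io.vertx.ext.web.handler.AuthHandler;
import io.vertx.ext.web.handler.BasicAuthHandler;

public class AuthoritySketch {

  static AuthHandler buildHandler() {
    // Stub provider for illustration; a real app would use e.g. a
    // Shiro or JDBC auth provider.
    AuthProvider provider = (authInfo, resultHandler) ->
        resultHandler.handle(Future.failedFuture("not implemented"));

    AuthHandler handler = BasicAuthHandler.create(provider);
    // "Authority" is what many frameworks call a permission: a user
    // lacking it gets a 403 from this handler.
    handler.addAuthority("print:all");
    return handler;
  }

  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    Router router = Router.router(vertx);
    router.route("/protected/*").handler(buildHandler());
    vertx.close();
  }
}
```

This matches amr's reading: the methods are not about authentication itself but about the authorization check the handler performs after the user is known.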