
[10:43:37] <aesteve> hi everyone !

[10:47:33] <aesteve> temporalfox: I think I found something that could be annoying in some very specific use cases. This sounds normal to me, but maybe someone could ask the question

[10:47:52] <temporalfox> hi

[10:47:58] <temporalfox> in what ?

[10:48:20] <aesteve> https://gist.github.com/aesteve/3e85e976624fd7f89fcbcf4e8e199f1e

[10:49:01] <aesteve> there's nothing wrong, but I was a bit surprised when letting my IDE reformatting my code

[10:51:11] <aesteve> that's the usual thing: “route()” actually creates a new route every time it's invoked. But wrapped in a lambda it sounds weird

[10:56:17] <temporalfox> why doesnt it work ?

[10:56:29] <aesteve> refresh the gist please

[10:56:35] <temporalfox> it will be a single one ?

[10:56:37] <aesteve> i think you'll understand

[10:56:56] <temporalfox> the same instance of Route

[10:57:04] <temporalfox> that's just how java lambdas are

[10:57:18] <aesteve> yeah totally

[10:57:46] <aesteve> it's just that route() actually has a side effect (from day 0 of vertx-web)

[10:58:12] <aesteve> and even IDEs / code analyzers are confused

[10:58:40] <temporalfox> ah yes I see, they propose to simplify it

[10:58:49] <aesteve> anyway I guess I'll submit a bug to Sonar in that case

[10:59:12] <aesteve> because it shouldn't assume that a method is “pure”

[10:59:34] <aesteve> in fact the proposed refactor would work if and only if “route” is a pure function
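
For illustration, a minimal sketch of the point being made (hypothetical handlers, not the gist itself): because router.route() registers and returns a new Route on every call, hoisting the call out of a lambda, as an automated "simplification" might suggest, changes the behaviour, so the refactor is only safe if route() were a pure function.

  import io.vertx.core.Handler;
  import io.vertx.ext.web.Route;
  import io.vertx.ext.web.Router;
  import io.vertx.ext.web.RoutingContext;
  import java.util.function.Consumer;

  class RoutePurityExample {
    static void register(Router router) {
      // Original form: every call of the lambda invokes route() again,
      // so each handler ends up on its own freshly created Route.
      Consumer<Handler<RoutingContext>> addRoute =
          handler -> router.route().handler(handler);
      addRoute.accept(ctx -> ctx.response().end("a"));
      addRoute.accept(ctx -> ctx.response().end("b")); // two distinct routes

      // "Simplified" form an IDE / Sonar may suggest: route() is hoisted out of
      // the lambda, so both handlers are attached to the same single Route.
      // Only equivalent if route() were pure, which it is not.
      Route single = router.route();
      Consumer<Handler<RoutingContext>> addHandler =
          handler -> single.handler(handler);
      addHandler.accept(ctx -> ctx.response().end("a"));
      addHandler.accept(ctx -> ctx.response().end("b")); // same Route, chained handlers
    }
  }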

[11:07:58] <aesteve> I'll see what they think about it. Maybe I just misunderstood Sonar's refactoring advice. Did you have a chance to look at AsyncUtils Julien ?

[11:19:45] <temporalfox> yes a little

[11:19:49] <temporalfox> I spotted the chain

[11:20:01] <temporalfox> and Clement also wants to add such composition

[11:20:11] <temporalfox> look at the Devoxx hands on

[11:20:14] <temporalfox> there is a Chain class

[11:20:19] <temporalfox> that does something similar
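
Neither AsyncUtils nor the Devoxx Chain class is shown in the log; as a rough, hypothetical illustration of the kind of composition being discussed (sequential callback chaining without Rx), a minimal helper could look like this:

  import io.vertx.core.AsyncResult;
  import io.vertx.core.Future;
  import io.vertx.core.Handler;
  import java.util.function.Function;

  // An AsyncStep is any async operation that signals completion through a Handler<AsyncResult<T>>.
  @FunctionalInterface
  interface AsyncStep<T> {
    void run(Handler<AsyncResult<T>> done);
  }

  final class Chain {
    // Run 'first'; only if it succeeds, feed its result into 'next'.
    // Any failure short-circuits straight to the final handler.
    static <T, U> void chain(AsyncStep<T> first,
                             Function<T, AsyncStep<U>> next,
                             Handler<AsyncResult<U>> done) {
      first.run(ar -> {
        if (ar.failed()) {
          done.handle(Future.failedFuture(ar.cause()));
        } else {
          next.apply(ar.result()).run(done);
        }
      });
    }
  }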

[11:21:03] <aesteve> ah ok ! I needed it a few times. But sometimes I wondered if that's not a smell of “you should start using vertx-rx”

[11:22:11] <temporalfox> that's debatable

[11:22:17] <temporalfox> there are indeed non-goals for vertx composition

[11:22:34] <temporalfox> the goal is not to be a functional reactive style thing

[11:22:48] <temporalfox> rather to handle common use cases people are facing and don't want to use rx for

[11:23:53] <aesteve> idd

[11:24:28] <aesteve> that's quite hard to know when you just need composition and not a full Observable thing

[11:25:26] <aesteve> but Devoxx inspired me a lot on this matter. I think with Nubes I should start supporting observables. That'd be a very clean way to support async methods declared in a synchronous style

[11:26:11] <aesteve> @GET(“/foo/bar”) public Observable<JsonObject> getBarAsJson() {}

[11:27:25] <aesteve> for now nubes assumes every method declaring RoutingContext as a parameter might be asynchronous (and thus lets the user call “next”)
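
A rough sketch of how such an Observable-returning method could be bridged to the HTTP response (assuming RxJava 1.x; the adapter name and behaviour are illustrative, not Nubes' actual internals):

  import io.vertx.core.json.JsonObject;
  import io.vertx.ext.web.RoutingContext;
  import rx.Observable;

  class ObservableResultAdapter {
    // Subscribe to the Observable returned by the annotated method:
    // the first emitted JsonObject becomes the response body, errors fail the context.
    static void handle(RoutingContext ctx, Observable<JsonObject> result) {
      result.first().subscribe(
          json -> ctx.response()
                     .putHeader("Content-Type", "application/json")
                     .end(json.encode()),
          ctx::fail
      );
    }
  }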

[11:29:50] <aesteve> is http2 still in a separate branch or have you merged it yet ?

[11:33:18] <temporalfox> aesteve it is merged now

[11:33:46] <aesteve> ok I'll see if my small showcase still works and if I notice some improvements

[11:36:38] <aesteve> ah yes I remember, I was trying to emulate latency on the server-side but this wasn't the way to do it

[11:48:49] * ChanServ sets mode: +o temporalfox

[11:49:34] <aesteve> mmh still not seeing any difference on localhost

[11:49:46] <aesteve> I'll try to build it on AWS just to be sure

[11:49:57] <aesteve> run it

[12:05:28] <temporalfox> yeah

[12:05:34] <temporalfox> but such a use case is not really realistic

[12:05:44] <temporalfox> and as I said before there is one difference : it uses a single connection

[12:05:49] <temporalfox> instead of 5

[12:05:58] <temporalfox> perhaps you can restrict your browser to use a single connection ?

[12:11:44] <AlexLehm> aesteve: there is a test server for http2 from the golang project that tries to show the difference between http1 and http2 with a tiled image consisting of 100 images, maybe you can use that

[12:12:16] <aesteve> temporalfox: I'll see what I can do

[12:12:34] <aesteve> AlexLehm: yeah that's the example mine is based on

[12:12:37] <AlexLehm> http://http2.golang.org/gophertiles

[12:12:50] <AlexLehm> ok

[12:13:00] <aesteve> https://github.com/aesteve/http2-showcase

[12:13:56] <aesteve> I am so not familiar with openshift :\

[12:14:14] <temporalfox> you can make it work with a fatjar

[12:14:22] <temporalfox> that's what I did when I wrote nonobot

[12:14:42] <aesteve> It's bound to a repo, so I can add a hook, right ? so that it creates the fatJar then runs it every time I push ?

[12:15:59] <temporalfox> no, you have your own openshift github repo

[12:16:21] <temporalfox> in which you copy a fatjar

[12:16:28] <aesteve> yeah ok I wrote it wrong

[12:16:28] <temporalfox> then you push this to openshift

[12:16:45] <aesteve> I have put everything in the same repo so that it's easier

[12:16:58] <aesteve> the repo has 2 origins (github + openshift)

[12:17:26] <aesteve> and I was wondering if as an action_hook I could also add the build, or if I had to build manually ?

[12:17:58] <aesteve> mmhh actually the fatJar is in the build dir so it won't be pushed anyway… nevermind.

[12:28:34] <AlexLehm> temporalfox: I just noticed that there is an option to check the ssl host in netclient now, that will substantially simplify the stuff I implemented in mail-client

[13:47:19] <aesteve> I definitely don't understand http2 :D

[13:48:16] <aesteve> I can't see any difference, and don't understand how to tell the browser to use a single connection

[13:48:33] <aesteve> (and I have no idea how to create a realistic use case)

[13:51:25] <aesteve> I'll leave the example as is, temporalfox, if you ever want to use it or change it, but I definitely don't understand what I should do. Sorry :(

[14:25:02] * ChanServ sets mode: +o temporal_
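
For reference, a minimal sketch of the server side of such a showcase (assuming Vert.x 3.3 APIs; the key/cert paths and port are illustrative): HTTP/2 is negotiated via ALPN, so the server needs SSL and ALPN enabled, while the same Router can also be served over plain HTTP/1.1 for comparison.

  import io.vertx.core.AbstractVerticle;
  import io.vertx.core.http.HttpServerOptions;
  import io.vertx.core.net.PemKeyCertOptions;
  import io.vertx.ext.web.Router;
  import io.vertx.ext.web.handler.StaticHandler;

  public class Http2ShowcaseVerticle extends AbstractVerticle {
    @Override
    public void start() {
      Router router = Router.router(vertx);
      router.route("/assets/*").handler(StaticHandler.create("assets"));

      HttpServerOptions options = new HttpServerOptions()
          .setSsl(true)
          .setUseAlpn(true)                        // required to negotiate h2
          .setPemKeyCertOptions(new PemKeyCertOptions()
              .setKeyPath("tls/server-key.pem")    // illustrative paths
              .setCertPath("tls/server-cert.pem"));

      vertx.createHttpServer(options)
           .requestHandler(router::accept)         // Vert.x 3.x idiom
           .listen(4043);
    }
  }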

[18:57:14] * ChanServ sets mode: +o temporalfox

[21:51:33] <temporalfox> aesteve hi again

[21:53:37] <aesteve> hi again temporalfox

[21:53:45] <temporalfox> looking at the example with http/2

[21:54:21] <aesteve> let me see, I can't remember in which state I pushed it

[21:54:38] <temporalfox> do you use push promise in the example ?

[21:54:42] <temporalfox> I don't remember

[21:54:56] <aesteve> I don't think so

[21:55:07] <temporalfox> ok

[21:55:16] <temporalfox> I'm going to see if the http2 example we see online uses it or not

[21:55:27] <temporalfox> do we have source code for this example ?

[21:57:21] <aesteve> https://go.googlesource.com/net/+/master/http2/h2demo/h2demo.go

[21:57:52] <aesteve> and mine's here : https://github.com/aesteve/http2-showcase

[21:58:38] <temporalfox> ok

[22:09:38] <temporalfox> I'm going to look at the chrome networking tab

[22:09:45] <temporalfox> to compare http/2 and http/1.1 in both cases

[22:10:04] <temporalfox> I think one difference is that

[22:10:10] <temporalfox> in case of the example online

[22:10:25] <temporalfox> is the TTFB

[22:10:32] <temporalfox> the Time To First Byte for each image

[22:10:48] <temporalfox> TTFB == time between when the browser sent the request and got the first byte

[22:11:02] <temporalfox> TTFB == 140 ms for each image

[22:11:06] <temporalfox> in HTTP/1.1

[22:11:17] <temporalfox> there are 5 simultaneous connections

[22:11:25] <temporalfox> and 183 requests

[22:11:45] <temporalfox> so that makes 182 / 5 * 140

[22:11:47] <temporalfox> approx

[22:11:56] <temporalfox> because the first five requests take more time to connect

[22:12:26] <temporalfox> the next 5 also take more time, because of TCP slow start I think

[22:12:37] <temporalfox> then it's 140 ms per image

[22:13:12] <temporalfox> 182 / 5 * 140 = 5 seconds

[22:13:20] <temporalfox> and my browser tells me : 4.85 seconds

[22:13:28] <temporalfox> so it's close

[22:13:50] <temporalfox> so in this example you can estimate the load time of the page, it is dominated by this simple math

[22:14:47] <temporalfox> the content download itself is very small

[22:15:35] <temporalfox> once it gets the first byte

[22:16:06] <temporalfox> well we don't care actually

[22:16:17] <temporalfox> what we care about is the latency between when the request is sent and the last byte is received

[22:16:24] <temporalfox> now let's look at HTTP/2

[22:16:39] <temporalfox> it tells me 860ms

[22:16:58] <temporalfox> it looks like there are two sets of requests

[22:17:12] <temporalfox> that's what the browser does

[22:18:48] <temporalfox> the page is obtained in 140 ms (like other resources) and the browser can quickly start to get resources

[22:19:31] <temporalfox> so it starts to download images at 200ms

[22:19:47] <temporalfox> the first batch

[22:19:56] <temporalfox> all in

[22:20:10] <temporalfox> the max of these images is 627ms

[22:20:24] <temporalfox> but meanwhile the second batch begins

[22:20:35] <temporalfox> at around 300ms

[22:20:56] <temporalfox> and the max is around 565ms

[22:21:11] <temporalfox> so the page load time is dominated by 300 + 560 ms

[22:21:23] <temporalfox> and the browser tells me load == 860ms

[22:21:56] <temporalfox> now I will compare that with your example

[22:22:23] <temporalfox> I'm wondering if we can export the timeline of chrome for further reference

[22:23:09] <temporalfox> but well now I have an idea of how it happens

[22:23:56] <temporalfox> one noticeable diff with http/1.1 is that the latency for each resource is higher

[22:24:03] <temporalfox> 140ms versus 500ms

[22:24:22] <temporalfox> but 1/2 of the resources can be obtained in

[22:24:44] <temporalfox> now let's look at the Vert.x http/2 example

[22:24:56] <temporalfox> maybe I can cheat and add a pause in the server to simulate latency

[22:25:00] <temporalfox> otherwise we will have zero latency

[22:25:15] <temporalfox> so we should do that in the server for both http/1.1 and http/2

[22:25:37] <temporalfox> I think the latency parameter in the online demo does something similar

[22:25:56] <temporalfox> but when we set zero it actually means zero + your network latency

[22:26:11] <temporalfox> so in the case of a local example it should be equal to the network latency

[22:26:14] <aesteve> I tried to in a previous version

[22:26:28] <temporalfox> or host it on the same server

[22:26:30] <aesteve> to use executeBlocking / sleep to simulate latency

[22:26:34] <temporalfox> no need

[22:26:37] <temporalfox> just a timer ?

[22:26:54] <aesteve> ah yes, good idea

[22:27:31] <temporalfox> damn gradle :-)

[22:27:42] <temporalfox> btw are you convinced by java 9 modules :-) ?

[22:29:07] <aesteve> idk :s I kinda like the idea of exporting exactly what I want to be exported

[22:29:34] <aesteve> but I'm not convinced by the way of doing it

[22:30:10] <aesteve> I liked Cedric's talk though, as usual, very clear and informative

[22:30:49] <temporalfox> what is not clear is whether java 9 solves the problem of running a jvm with two different versions of a jar

[22:30:56] <temporalfox> it looks like it does not

[22:31:10] <aesteve> yeah I didn't understand that

[22:31:22] <aesteve> Rémi's talk started with exactly that example

[22:31:40] <aesteve> but then… didn't show how it was solved (or I missed the point completely)

[22:32:26] <temporalfox> it's not solved I think

[22:33:10] <temporalfox> how do you run the server in your example ?

[22:33:15] <aesteve> gradle run

[22:33:17] <temporalfox> ah it's gradle

[22:33:18] <temporalfox> ok

[22:33:36] <temporalfox> so now that I understand the loading behavior of chrome in the first example I can compare

[22:34:15] <aesteve> my network tab with the go example looks pretty clear, batches of requests as you said

[22:34:35] <aesteve> but with my example it's actually a christmas tree

[22:34:49] <temporalfox> we're here to find out :-)

[22:34:56] <temporalfox> maybe it's your HTML ?

[22:34:59] <temporalfox> or do you reuse the same HTML ?

[22:35:13] <aesteve> same thing, <p><img />

[22:35:32] <aesteve> the only difference is the images and the latency

[22:36:57] <temporalfox> you see differences because there is no such latency as before

[22:37:14] <temporalfox> in the other http/2 example it is 500ms to get something

[22:37:20] <temporalfox> so the small differences are not visible

[22:37:38] <temporalfox> in your example we can also see 2 batches of requests

[22:37:45] <temporalfox> however they are not properly aligned

[22:37:52] <aesteve> that's what I thought, that's why I started looking at openshift + aws-east

[22:37:56] <temporalfox> they are interleaved

[22:38:06] <aesteve> here goes the christmas tree

[22:38:09] <temporalfox> I'm going to set a timer

[22:38:16] <temporalfox> of 140ms

[22:38:22] <temporalfox> on each resource

[22:38:29] <temporalfox> how can we do that with vertx web ?

[22:38:35] <temporalfox> I add a handler before in the router ?

[22:38:44] <aesteve> you can

[22:38:58] <aesteve> I call that “instrumented assets”

[22:39:10] <temporalfox> what should I use ?

[22:39:48] <temporalfox> here I can see you use

[22:39:56] <temporalfox> router.getWithRegex

[22:39:59] <temporalfox> router.get(“/assets/*”)

[22:40:07] <temporalfox> what can I use to add a 140ms timer on the request ?

[22:40:25] <aesteve> I have pushed some code

[22:40:26] <temporalfox> router.route() ?

[22:40:37] <aesteve> this should work, just tell me

[22:41:18] <temporalfox> we also need a timer on the template

[22:41:29] <temporalfox> on anything

[22:41:31] <temporalfox> it should be global

[22:41:47] <aesteve> ok, so same code on router.route().handler

[22:41:58] <temporalfox> can you push it ?

[22:42:15] <temporalfox> did you check the example ?

[22:42:19] <temporalfox> it does not work anymore for me

[22:42:31] <temporalfox> and I should get image.hbs right ?

[22:42:42] <temporalfox> ah actually I changed the port in my example

[22:42:45] <temporalfox> to 8443

[22:42:50] <temporalfox> so I need to go back to the previous one

[22:43:22] <aesteve> I pushed the code

[22:43:23] <temporalfox> so there I have load time == 1 second

[22:43:40] <temporalfox> and 637 ms when I reload

[22:43:45] <temporalfox> because the connection is already open I think

[22:43:50] <temporalfox> let's try http/1

[22:44:05] <temporalfox> so with http/1

[22:44:09] <temporalfox> it's 6.43 s

[22:44:23] <temporalfox> and 5.71 s when I reload

[22:44:28] <temporalfox> with the same connections

[22:44:33] <temporalfox> so that's definitely comparable

[22:44:36] <temporalfox> imho

[22:44:41] <temporalfox> can you try it too ?

[22:45:03] <aesteve> I'll try too, just finishing my yoghourt first

[22:45:06] <temporalfox> :-)

[22:45:41] <temporalfox> so we should make this example with the latency parameter

[22:45:49] <temporalfox> like in the other http/2 demo

[22:46:15] <temporalfox> and locally we should use something equal to the expected server latency to have something comparable

[22:46:50] <temporalfox> can you do that afterward ?

[22:47:07] <aesteve> gosh, the difference is stunning

[22:47:11] <aesteve> :o

[22:48:10] <aesteve> yes I can, if host == localhost ⇒ latency == 100ms + latencyQueryParam, else latency == latencyQueryParam

[22:48:24] <temporalfox> no just set a latency param

[22:48:36] <aesteve> ok, as you wish

[22:48:59] <temporalfox> and put in the readme that to have a real example one should use a latency equal to the base network latency

[22:49:07] <temporalfox> and add links at the top

[22:49:08] <temporalfox> in the template

[22:49:22] <temporalfox> with http/1.1 and http/2 examples with different latencies

[22:49:30] <temporalfox> like in the online demo

[22:49:34] <aesteve> alright, I'll do that. I'll also try to push it somewhere on the cloud
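
A minimal sketch of the latency simulation discussed above (the parameter name and default are illustrative): a global handler registered with router.route() that delays every request with vertx.setTimer before calling next(), so templates and assets alike get the artificial latency.

  import io.vertx.core.Vertx;
  import io.vertx.ext.web.Router;

  class LatencySimulator {
    // Delay every request before the remaining handlers run;
    // ?latency=140 (milliseconds) overrides the default.
    static void install(Vertx vertx, Router router, long defaultMs) {
      router.route().handler(ctx -> {
        String param = ctx.request().getParam("latency");
        long delay = param != null ? Long.parseLong(param) : defaultMs;
        vertx.setTimer(Math.max(1, delay), id -> ctx.next());
      });
    }
  }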

[22:49:50] <temporalfox> yes that would be a realistic example

[22:50:00] <aesteve> maybe with aws + docker

[22:50:23] <aesteve> I've been struggling with openshift today and I can't listen on two different ports

[22:50:25] <temporalfox> yes rather aws than openshift

[22:50:39] <temporalfox> and I'm not sure openshift allows the http/2 protocol

[22:50:54] <aesteve> I'll ask Clement if I have an issue with Docker

[22:51:23] <temporalfox> yes he's the man for docker

[22:51:24] <aesteve> I'll need to package the alpn jar within the docker container, too

[22:51:48] <temporalfox> you can try to use OpenSSL now

[22:51:58] <temporalfox> as we do support OpenSSL in vertx master

[22:52:11] <temporalfox> otherwise there is the alpn agent

[22:52:31] <temporalfox> note that the alpn jar needs to be on the bootclasspath

[22:52:33] <aesteve> OK I'll keep that in mind, I'll open issues on Github so that you can track progress

[22:53:23] <temporalfox> where ?

[22:53:28] <temporalfox> in the project ?

[22:53:47] <temporalfox> going to watch it

[22:55:56] <aesteve> I'll handle the latency + Readme + links stuff tonight. I'll keep Docker for another evening, too tired tonight

[22:56:11] <temporalfox> np, you have already done a lot :-)

[22:56:12] <temporalfox> thanks

[23:03:48] <aesteve> 3.3 is pushed to a snapshot repository ?

[23:05:32] <aesteve> so that anyone can build the project without needing to checkout/install vertx

[23:30:53] * ChanServ sets mode: +o temporalfox

[23:39:54] * ChanServ sets mode: +o temporal_