irc:1453330800 [2017/05/27 13:44] (current)
[11:02:04] <aesteve> hi everyone :)

[11:12:42] <temporal_> aesteve hi Arnaud

[11:13:03] <temporal_> the future stuff is merged in master, have you had a chance to give it a look?

[11:13:14] <temporal_> I mean besides the feedback you gave a few days ago

[11:17:08] <aesteve> no I didn't have a look, sorry

[11:18:05] <aesteve> I have to first think about how to wrap it into my "batch executor" thingy

[11:18:19] <aesteve> not wrap it, use it, actually

[11:18:51] <aesteve> I have a question about vertx-unit though if you have a minute

[11:40:06] <temporal_> yes

[11:40:14] <temporal_> aesteve yes

[12:03:08] <aesteve> sorry, I was away for a sec

[12:03:19] <aesteve> the question is re. exception handling

[12:04:21] <aesteve> let's say you want to test the eventbus SockJS bridge or something like that

[12:04:51] <aesteve> you'll send a msg over the eventbus, and your test will be ws.handler { /* checkMsg */ async.complete() }

[12:05:43] <aesteve> if context.fail() is called within checkMsg, no problem at all, the test fails, etc.

[12:06:42] <aesteve> if an exception occurs during checkMsg... async.complete() is obviously not called, but... if you receive another message then you're screwed

[12:06:54] <aesteve> your test can pass then, and the first exception was swallowed

[12:09:28] <aesteve> so I was wondering if there was any way to deal with that issue temporal_

[12:10:06] <aesteve> basically saying: "if an exception occurs within any callback, that's not normal, the test should stop immediately"

[12:10:39] <temporal_> ah you mean you fail after the test completes?

[12:10:46] <temporal_> it fails after you called complete()

[12:12:58] <aesteve> that's not what I saw, I'll try to push a reproducer
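The failure mode aesteve describes is not specific to Vert.x: an exception thrown inside a callback running on another thread is silently captured, and a later message can still complete the test. A minimal plain-Java sketch of the mechanism (no Vert.x; all names here are illustrative):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

public class SwallowedException {
    public static void main(String[] args) throws Exception {
        // Stand-in for the event loop: a single-thread executor.
        ExecutorService loop = Executors.newSingleThreadExecutor();
        // Stand-in for Async: the test "passes" once this reaches zero.
        CountDownLatch done = new CountDownLatch(1);

        // Handler that checks a message, then completes.
        Consumer<String> handler = msg -> {
            if (msg.equals("bad")) {
                // Thrown on the executor thread: captured by the Future
                // nobody looks at, so it never reaches the test.
                throw new IllegalStateException("checkMsg failed");
            }
            done.countDown(); // a later "good" message still completes
        };

        loop.submit(() -> handler.accept("bad"));   // exception vanishes
        loop.submit(() -> handler.accept("good"));  // completes the latch anyway

        boolean passed = done.await(2, TimeUnit.SECONDS);
        System.out.println("test passed: " + passed); // true, bug hidden
        loop.shutdown();
    }
}
```

The "test" reports success even though the first callback blew up, which is exactly why a hook for uncaught event-loop errors is being discussed below.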
 +
[12:13:30] <temporal_> ok that would be cool :-)

[12:13:54] <temporal_> ahhhhhh

[12:14:00] <temporal_> I see what you mean

[12:14:12] <temporal_> like an error that is happening on the event loop

[12:14:15] <temporal_> yeah...

[12:14:22] <temporal_> that's identified

[12:14:22] <aesteve> exactly

[12:14:40] <temporal_> I've opened an issue in vert.x core https://github.com/eclipse/vert.x/issues/1216

[12:14:41] <aesteve> do you have an issue # I can track?

[12:14:47] <aesteve> you're reading my mind

[12:14:52] <temporal_> to have a way to be aware of what is happening in the event loop

[12:15:10] <temporal_> so this could be used in tests to know about errors thrown by the event loop

[12:15:12] <temporal_> (or context)

[12:15:27] <temporal_> and basically it would also allow us to avoid having a TestContext

[12:15:35] <temporal_> passed

[12:16:03] <temporal_> that's an improvement I would like for 3.3 maybe

[12:16:13] <temporal_> it would be a good win

[12:16:16] <temporal_> in usability

[12:16:27] <aesteve> absolutely

[12:16:55] <aesteve> in fact it happened to a coworker and I understood why it was happening, but not how to explain it to him

[12:17:00] <temporal_> so the idea is to have something to allow that

[12:17:03] <temporal_> and have this the simplest possible in core

[12:17:42] <temporal_> and it would allow using JUnit assertions

[12:17:44] <temporal_> I think

[12:17:54] <temporal_> because those just throw assertion errors

[12:18:07] <aesteve> idd

[12:18:07] <temporal_> so they would be reported

[12:18:14] <temporal_> and the test fails

[12:18:32] <aesteve> careful with the stacktrace though, but I don't think we would lose info

[12:18:39] <temporal_> no we would not

[12:19:58] <aesteve> thanks for the pointer Julien!

[12:24:18] <temporal_> you're welcome
 +
[12:57:46] *** ChanServ sets mode: +o temporalfox

[14:40:46] <vertxfan> if I run multiple instances of a verticle, should I expect them to run concurrently?

[14:41:02] <vertxfan> I mean, do I need to synchronize access to class variables in the verticle code?

[14:41:56] <vertxfan> (I use DeploymentOptions.setInstances() to define the number of instances)

[14:45:31] <temporalfox> vertxfan no, by default you don't need synchronization at all

[14:45:47] <temporalfox> what do you mean by class variables actually?

[14:45:58] <temporalfox> you mean non static right?

[14:46:12] <aesteve> mmh "class variable"

[14:46:18] <vertxfan> yes, non static

[14:46:29] <aesteve> so: instance variable

[14:46:31] <vertxfan> yes

[14:46:39] <temporalfox> it is fine then, you will have X instances and each instance will always use the same thread

[14:46:43] <aesteve> no synchronized required indeed

[14:47:09] <vertxfan> got it, thanks

[14:47:16] <aesteve> put a log or println within your constructor, you'll see what happens ;)
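temporalfox's point (each verticle instance is pinned to one event-loop thread, so non-static instance fields need no synchronization) can be sketched without Vert.x by modelling each instance as a single-thread executor. All names below are illustrative:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Plain-Java sketch of why instance fields in a verticle are thread-safe
// without locks: every handler of one instance runs on one fixed thread,
// like a verticle pinned to its event loop.
public class OneThreadPerInstance {
    static class Instance {
        // Stand-in for the instance's event loop.
        private final ExecutorService eventLoop = Executors.newSingleThreadExecutor();
        private int counter = 0; // instance field, only touched on eventLoop

        Future<String> handle() {
            return eventLoop.submit(() -> {
                counter++; // safe: always the same thread for this instance
                return Thread.currentThread().getName();
            });
        }

        void close() {
            eventLoop.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        Instance a = new Instance();
        String first = a.handle().get();
        String second = a.handle().get();
        System.out.println("same thread: " + first.equals(second));
        a.close();
    }
}
```

With setInstances(X) you get X such instances, each with its own fields and its own thread; sharing static fields across instances is where synchronization would come back.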
 +
[15:41:07] <voidDotClass> When I do the query: SELECT * from foo WHERE (name ILIKE 'foo%') ORDER BY name ASC LIMIT 10 OFFSET 0, that returns 10 results, one of which has the id: 52

[15:41:17] <voidDotClass> oops, wrong channel

[15:41:52] <aesteve> temporalfox: did you have a look at: http://shipilev.net/blog/2016/arrays-wisdom-ancients/ ?

[15:42:34] <temporalfox> no, what is it ?

[15:48:15] <Ladicek> one of the more approachable Shipilev articles on dark corners of JVM performance :-)

[15:48:31] <aesteve> looks like a benchmark on the Collection.toArray() method (which seems widely used in https://github.com/eclipse/vert.x/search?utf8=%E2%9C%93&q=toArray )

[15:56:49] <temporalfox> :-)

[15:57:02] <temporalfox> we haven't fully optimized vert.x yet and it is already very fast!

[15:57:17] <mweb> hi, I'm upgrading to vert.x v3.2.0 and I'm trying to make a unit test using the @RunWith(VertxUnitRunner.class) annotation. The test is successful but logs some suspicious warnings like "You're already on a Vert.x context, are you sure you want to create a new Vertx instance?". How can I fix this warning? Thanks

[16:01:28] <temporalfox> I don't remember where this warning is coming from

[16:02:30] <mweb> it's from the VertxImpl constructor

[16:02:57] <mweb> it occurs when Vertx.currentContext() is not null

[16:03:46] <aesteve> temporalfox: yes but it sounds like an optimization for free, no?

[16:04:05] <temporalfox> yes clearly you should not do that

[16:04:18] <temporalfox> what does your test look like?

[16:04:27] <temporalfox> aesteve yes I agree

[16:04:43] <temporalfox> it would be good if someone spent time writing some micro benchmarks we can execute

[16:04:59] <mweb> can I post code here directly?

[16:06:27] <mweb> @RunWith(VertxUnitRunner.class) public class NoClusterMembersOneVerticleInstancesTests { // with this number we simulate the different member count of a cluster private final static int SIMULATED_CLUSTER_MEMBERS = 1; private Vertx vertx; private Logger log = LoggerFactory.getLogger(NoClusterMembersOneVerticleInstancesTests.class); private List<String> answers = new ArrayList<>(); @Before public void befor

[16:06:47] <mweb> that did not work as expected :-)

[16:09:04] <temporalfox> make a gist please

[16:09:45] <mweb> yes I have to commit first

[16:17:22] <mweb> ok, my test class looks like this: https://gist.github.com/mcweba/e78c62edffadb45d1c21

[16:17:55] <mweb> here is the link to the git repo https://github.com/mcweba/vertx-cluster-watchdog

[16:21:32] <aesteve> temporalfox: Tim pointed me at http://openjdk.java.net/projects/code-tools/jmh/ for micro-benchmarking one day

[16:21:49] <temporalfox> yes it is the trendy thing for microbench :-)

[16:23:50] <aesteve> maybe you should create a repo under the vert.x organization with the basic mvn structure and a single (simple) benchmark within

[16:24:15] <aesteve> then make an announcement on the user group and people will start submitting PRs with new benchmarks?

[16:27:18] <aesteve> mweb: which test is failing exactly?

[16:29:01] <temporalfox> mweb there should be no vertx when the vertx is created

[16:29:12] <temporalfox> mweb is it this line vertx = Vertx.vertx(); ?

[16:29:15] <temporalfox> 36

[16:29:20] <temporalfox> that prints this statement?

[16:29:33] <aesteve> I do this all the time temporalfox

[16:29:47] <temporalfox> the only way there could already be a context is to use the RunOnContextRule

[16:29:50] <mweb> the fun thing is that the 3 tests are not failing every time, but I also get the warnings when they succeed. I get the "...are you sure to create a new instance..." message and I also get "Thread Thread[vert.x-eventloop-thread-3,5,main] has been blocked for 2980 ms, time limit is 2000"

[16:30:07] <temporalfox> can you post a stack trace of this logging?

[16:30:15] <temporalfox> i.e. configure your logger to show the stack

[16:30:29] <temporalfox> or make a new Exception().printStackTrace()

[16:30:58] <aesteve> https://github.com/aesteve/nubes/blob/master/src/test/java/integration/VertxNubesTestBase.java#L28 ; https://github.com/aesteve/grooveex/blob/master/src/test/groovy/com/github/aesteve/vertx/groovy/specs/TestBase.groovy#L27

[16:31:02] <aesteve> never seen any warning

[16:31:12] <mweb> https://gist.github.com/mcweba/e0801a89a0cade960e4c

[16:31:43] <mweb> I made a gist with the stacktrace

[16:32:21] <aesteve> mmmh

[16:32:27] <aesteve> here your test is just failing: java.lang.AssertionError: Not equals : CONSISTENT != NO_RESULT

[16:33:15] <aesteve> the status message is "CONSISTENT" but you're expecting "NO_RESULT" https://gist.github.com/mcweba/e78c62edffadb45d1c21#file-noclustermembersoneverticleinstancestests-java-L71

[16:33:17] <mweb> that's true but the warnings also occur when the test succeeds

[16:33:46] <aesteve> WARNUNG: Thread Thread[vert.x-eventloop-thread-3,5,main] has been blocked for 2980 ms, time limit is 2000

[16:33:53] <aesteve> this means you're blocking the event loop

[16:34:11] <mweb> and how do I block the event loop?

[16:35:19] <temporalfox> you could lower the blocking detector threshold and have a stack trace printed

[16:35:29] <temporalfox> sometimes the event loop can be blocked in unit tests though

[16:35:34] <temporalfox> and it is not necessarily a problem

[16:35:38] <temporalfox> by the test itself

[16:35:57] <temporalfox> like when you wait for a countdown on the eventloop and are sure that it will be counted down or fail

[16:36:21] <mweb> so this could be a problem in the unit test only?

[16:37:28] <aesteve> we need the stacktrace to actually know; here we just know you're blocking the eventloop, but when it's blocked longer, you get a full stacktrace

[16:37:53] <aesteve> hence temporalfox asking you to lower the blocking detector delay

[16:38:16] <mweb> how do I lower the blocking detector delay?

[16:38:22] <temporalfox> I think it's a system property

[16:38:48] <temporalfox> maxEventLoopExecuteTime

[16:38:50] <temporalfox> option

[16:39:27] <temporalfox> actually

[16:39:28] <temporalfox> warningExceptionTime

[16:39:29] <temporalfox> in options
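The options temporalfox mentions can be sketched like this. This assumes Vert.x 3.x, where both thresholds are given in nanoseconds (the 500 ms value is illustrative); setMaxEventLoopExecuteTime controls when the blocked-thread warning fires, and setWarningExceptionTime controls when a full stack trace is attached to it:

```java
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class LowBlockedThreadThreshold {
    public static void main(String[] args) {
        VertxOptions options = new VertxOptions()
            // warn when the event loop is blocked longer than 500 ms
            .setMaxEventLoopExecuteTime(500_000_000L)
            // attach a full stack trace to the warning after 500 ms
            .setWarningExceptionTime(500_000_000L);
        Vertx vertx = Vertx.vertx(options);
    }
}
```

With the stack trace attached, the warning points at the exact line blocking the event loop, which is what the debugging below relies on.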
 +
[16:47:38] <mweb> ok I made the changes. see gist https://gist.github.com/mcweba/dca94664e17263106004

[16:48:11] <mweb> it's strange that the log says time limit is 0

[16:50:17] <mweb> when I run the build with drone.io I don't get the "blocking eventloop" messages: https://drone.io/github.com/mcweba/vertx-cluster-watchdog/6

[17:01:11] <aesteve> at li.chee.vertx.cluster.ClusterWatchdog.start(ClusterWatchdog.java:60)

[17:05:52] <mweb> that's just the creation of ClusterWatchdogHttpHandler. Isn't that allowed there?

[17:06:54] <temporalfox> didn't know drone.io

[17:07:32] <mweb> drone.io is just like codeship.com but with unlimited builds for public repositories :-)

[17:08:34] <temporalfox> so you start a new vertx

[17:08:36] <temporalfox> in your verticle

[17:08:42] <temporalfox> that's where the problem is

[17:08:50] <temporalfox>         at li.chee.vertx.cluster.ClusterWatchdogHttpHandler.<init>(ClusterWatchdogHttpHandler.java:15)

[17:08:50] <temporalfox>         at li.chee.vertx.cluster.ClusterWatchdog.start(ClusterWatchdog.java:60)

[17:08:51] <temporalfox>         at io.vertx.core.AbstractVerticle.start(AbstractVerticle.java:111)

[17:08:57] <aesteve> https://github.com/mcweba/vertx-cluster-watchdog/blob/master/src/main/java/li/chee/vertx/cluster/ClusterWatchdogHttpHandler.java#L15

[17:08:59] <temporalfox> I mean you can do that if you like

[17:09:02] <temporalfox> :-)

[17:09:03] <aesteve> that sounds wrong

[17:09:13] <temporalfox> but normally it would be just a "main"

[17:09:21] <temporalfox> that starts vertx with the clustered option

[17:09:27] <temporalfox> and deploys the rest

[17:09:33] <mweb> so I should pass the vertx into the ClusterWatchdogHttpHandler?

[17:09:56] <temporalfox> why not make ClusterWatchdogHttpHandler a verticle?

[17:10:02] <temporalfox> or wrap it with a verticle

[17:10:29] <temporalfox> but this watchdog, what does it do?

[17:10:34] <temporalfox> it starts new vertx instances?

[17:12:10] <temporalfox> I see

[17:12:10] <temporalfox>     Logger log;

[17:12:11] <temporalfox>     Router router = Router.router(Vertx.vertx());

[17:12:16] <temporalfox> you should pass it the current vertx

[17:12:33] <temporalfox> in ClusterWatchdog

[17:12:42] <temporalfox> you have

[17:12:42] <temporalfox> new ClusterWatchdogHttpHandler(log, resultQueueLength);

[17:12:49] <temporalfox> add the current vertx of the verticle

[17:12:57] <temporalfox> instead of creating one in the handler

[17:13:23] <mweb> I guess the problem is the Vertx.vertx() in ClusterWatchdogHttpHandler. With vert.x 2.1.2 we used the RouteMatcher and didn't have to use a vertx instance

[17:13:36] <temporalfox> also I think that

[17:13:38] <temporalfox> vertx.createHttpServer().requestHandler(clusterWatchdogHttpHandler).listen(port);

[17:13:45] <temporalfox> you should create the http server in the verticle

[17:13:50] <temporalfox> and when it is started

[17:14:02] <temporalfox> and you set on it directly the handler

[17:14:12] <temporalfox> forget the "and when it is started"

[17:14:30] <temporalfox> so the verticle takes care of starting the web server and sets the handler on it

[17:14:37] <temporalfox> and the handler does not really care about the webserver

[17:14:47] <temporalfox> anyway :-) like you prefer

[17:15:18] <mweb> the verticle is the ClusterWatchdog class

[17:15:44] <mweb> the ClusterWatchdogHttpHandler has no reference to the httpserver

[17:18:10] <mweb> I'm passing the vertx instance into the ClusterWatchdogHttpHandler and the warnings are gone :-) thank you
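The fix mweb applied can be sketched like this. The constructor shape only loosely follows the gist, so treat it as illustrative; it assumes Vert.x 3.x with vertx-web on the classpath, where Router.router(Vertx.vertx()) would otherwise spin up a second Vert.x instance inside the handler:

```java
import io.vertx.core.Handler;
import io.vertx.core.Vertx;
import io.vertx.core.http.HttpServerRequest;
import io.vertx.ext.web.Router;

public class ClusterWatchdogHttpHandler implements Handler<HttpServerRequest> {

    private final Router router;

    // The verticle hands down its own vertx instead of the handler
    // calling Vertx.vertx() itself.
    public ClusterWatchdogHttpHandler(Vertx vertx) {
        this.router = Router.router(vertx); // reuse the current vertx
    }

    @Override
    public void handle(HttpServerRequest request) {
        router.accept(request); // Vert.x 3.x API; later versions use router.handle(request)
    }
}
```

The verticle then wires it up with vertx.createHttpServer().requestHandler(new ClusterWatchdogHttpHandler(vertx)).listen(port), so only one Vert.x instance exists.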
 +
[17:20:35] <temporalfox> you're welcome!

[17:22:25] <mweb> I don't get your other advice of starting the http server in the verticle and then setting the handler directly on it

[19:21:15] *** ChanServ sets mode: +o temporalfox