
[09:48:25] <ppatiern> temporal_: hi !

[10:44:20] <temporal_> hi ppatiern

[10:45:10] <ppatiern> temporal_: I have to fix an issue on the vertx mqtt server … can I do that, or do we have to freeze it for the release ?

[10:50:12] <temporal_> any bugfix is welcome

[10:50:31] <ppatiern> ok ! I'll do that ;)

[10:59:03] <temporal_> what does it fix ?

[11:21:16] <temporal_> there is a question on the vertx group about kafka sharing ppatiern

[11:21:28] <temporal_> I think we've never discussed it and need to address it :-)

[11:21:33] <ppatiern> let me check

[11:22:03] <temporal_> I think this is related to processing

[11:22:14] <temporal_> and I think one can share a producer

[11:22:17] <temporal_> but it does not make sense

[11:22:23] <temporal_> and one should not share a consumer

[11:22:35] <temporal_> because there is a single handler on it

[11:22:44] <temporal_> so overall they should not be shared

[11:23:12] <ppatiern> my current AMQP - Kafka bridge, even though it doesn't use the Vert.x Kafka client, shares a producer

[11:23:17] <ppatiern> not the consumers of course

[11:23:32] <temporal_> well

[11:23:36] <temporal_> for producers, one could share them

[11:23:46] <temporal_> ok

[11:23:55] <temporal_> so for the producer, it could be shared or not

[11:24:03] <temporal_> the difference is that you will have the underlying kafka producer

[11:24:08] <temporal_> used differently

[11:24:17] <temporal_> consumer : should not be shared, since it has partitions assigned

[11:26:40] <ppatiern> for the consumer side … it depends whether you want your application to act like a Kafka consumer, being assigned different partitions from the others, or you just want to receive through only one consumer (with all partitions assigned) but then deliver messages to the different apps

[11:26:54] <ppatiern> but I don't think it can scale well

[11:27:01] <ppatiern> better to have NOT shared consumers
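A minimal sketch of the recommendation above, assuming the vertx-kafka-client API (the verticle, group, and topic names are illustrative): each verticle instance creates its own consumer, because a consumer is a read stream with a single handler, so a second verticle setting a handler on a shared instance would simply replace the first one's.

  // Each verticle owns its consumer; instances in the same group split the partitions.
  import io.vertx.core.AbstractVerticle;
  import io.vertx.kafka.client.consumer.KafkaConsumer;

  import java.util.HashMap;
  import java.util.Map;

  public class MyConsumerVerticle extends AbstractVerticle {

    @Override
    public void start() {
      Map<String, String> config = new HashMap<>();
      config.put("bootstrap.servers", "localhost:9092");
      config.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
      config.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
      config.put("group.id", "my-group");
      config.put("auto.offset.reset", "earliest");

      // NOT shared: a fresh consumer per verticle instance.
      KafkaConsumer<String, String> consumer = KafkaConsumer.create(vertx, config);
      consumer.handler(record ->
        System.out.println("partition=" + record.partition() + " value=" + record.value()));
      consumer.subscribe("my-topic");
    }
  }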

[11:27:43] <temporal_> ppatiern on the consumer, could it be possible to decouple the kafka consumer thread and the event loop

[11:27:55] <temporal_> and instead of having a single handler

[11:27:59] <temporal_> use a List<Handler>

[11:28:02] <temporal_> and do round robin ?

[11:29:10] <temporal_> I mean now we have KafkaReadStream == Consumer

[11:29:15] <temporal_> but we could have 1-many ?

[11:29:16] <ppatiern> it could be another approach, yes … but in some sense you are breaking the Kafka consumer pattern

[11:29:18] <ppatiern> I mean …

[11:29:21] <temporal_> I don't know if it makes sense

[11:29:25] <temporal_> yes

[11:29:26] <ppatiern> let's consider a topic with 4 partitions ok ?

[11:29:28] <temporal_> it would complicate things

[11:29:41] <temporal_> ahh

[11:29:45] <ppatiern> now … I have only one consumer and it will have all 4 partitions assigned

[11:29:46] <temporal_> it would break ordering

[11:29:53] <ppatiern> but say you want to use 6 clients

[11:29:55] <temporal_> and usually you use partitions to keep order

[11:30:01] <ppatiern> right

[11:30:35] <ppatiern> and the other thing is that …

[11:30:42] <ppatiern> with your proposal … you can have 6 app consumers reading from 4 partitions

[11:30:48] <ppatiern> that is something “strange” to kafka

[11:30:58] <ppatiern> more competing consumers than the number of partitions

[11:31:28] <temporal_> yes we should not try to change kafka's model

[11:31:32] <temporal_> or hide it

[11:31:38] <temporal_> or even provide an alternative to it

[11:31:41] <ppatiern> yes

[11:31:56] <temporal_> can you add a note in the doc about it ?

[11:32:05] <ppatiern> with your proposal we should avoid having more handlers than topic partitions

[11:32:47] <ppatiern> you mean … a note about having shared producer is possible and feasible but having shared consumer is not good ?

[11:32:49] <ppatiern> :-)

[11:34:26] <temporal_> a note in the documentation that explains the usage pattern with respect to multiple verticles

[11:34:54] <ppatiern> ok
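A sketch of the usage pattern such a note could describe, reusing the illustrative MyConsumerVerticle from above and the 4-partition topic from the discussion: deploy one verticle instance per consumer and keep the instance count at or below the partition count, since extra consumers in the same group would sit idle.

  // Scale by deploying multiple instances, one consumer each; 4 partitions -> at most 4 instances.
  import io.vertx.core.DeploymentOptions;
  import io.vertx.core.Vertx;

  public class Main {
    public static void main(String[] args) {
      Vertx vertx = Vertx.vertx();
      vertx.deployVerticle("com.example.MyConsumerVerticle",
          new DeploymentOptions().setInstances(4));
    }
  }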

[12:26:54] <temporal_> ppatiern if we were to support a shared kafka producer, what would identify the producer ?

[12:27:34] <temporal_> bootstrap.servers property ?

[13:41:54] <temporal_> ppatiern I'm adding auto-close in verticle for kafka

[14:51:54] <temporal_> ppatiern here we go https://github.com/vert-x3/vertx-kafka-client/pull/15

[15:13:29] <ppatiern> temporal_: merged !

[15:13:40] <temporal_> great

[15:13:54] <temporal_> I can try to contribute the shared stuff as well later

[15:14:07] <ppatiern> ok
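A sketch of what the auto-close behaviour in that pull request enables, assuming a Kafka client created inside a verticle is tied to that verticle's lifecycle (reusing the illustrative MyConsumerVerticle from above): undeploying the verticle closes its consumer without an explicit close().

  // With auto-close, stop() needs no explicit consumer.close().
  import io.vertx.core.Vertx;

  public class AutoCloseExample {
    public static void main(String[] args) {
      Vertx vertx = Vertx.vertx();
      vertx.deployVerticle(new MyConsumerVerticle(), deployed -> {
        if (deployed.succeeded()) {
          // Undeploying should also close the consumer the verticle created.
          vertx.undeploy(deployed.result());
        }
      });
    }
  }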

[15:14:15] <temporal_> and about sharing : would bootstrap.servers be used as the key in the map ?

[15:14:33] <temporal_> what are the characteristics you use to share a producer ?

[15:20:20] <temporal_> allo ?

[15:21:54] <temporal_> ppatiern ?

[15:22:22] <ppatiern> sorry Julien I'm jumping between different stuff … in a call now :-)

[15:34:14] <temporal_> just looking for a simple answer :-)

[15:49:27] <ppatiern> temporal_ sorry, I'm here now

[15:49:44] <temporal_> so for now I'm going to use a key

[15:49:48] <temporal_> that is the producer config

[15:49:48] <ppatiern> what do you mean by using bootstrap.servers as a key ?

[15:50:08] <temporal_> the idea is to maintain a static Map<key, producer>

[15:50:20] <temporal_> and use KafkaProducer.createShared()

[15:50:26] <temporal_> that returns the same instance

[15:50:43] <ppatiern> but ideally I could have more producers with the same bootstrap.servers

[15:51:01] <temporal_> in this case don't use createShared

[15:51:11] <temporal_> or should people name producers instead ?

[15:51:20] <temporal_> createSharedProducer("foo", config)

[15:51:22] <temporal_> ?

[15:51:30] <ppatiern> using "foo" as a key

[15:51:32] <ppatiern> yes better

[15:51:35] <temporal_> ok

[15:51:37] <temporal_> so logical naming

[15:51:43] <ppatiern> sure

[15:52:39] <temporal_> ok

[15:52:41] <temporal_> will use that
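A sketch of how the logical-naming idea settled on above could look (the method name and signature follow the chat's KafkaProducer.createShared proposal, so treat them as an assumption rather than a published API): two createShared calls with the same name return the same underlying producer, keyed by the name in a static map.

  // "the-producer" is the logical name acting as the key in the shared map.
  import io.vertx.core.Vertx;
  import io.vertx.kafka.client.producer.KafkaProducer;
  import io.vertx.kafka.client.producer.KafkaProducerRecord;

  import java.util.HashMap;
  import java.util.Map;

  public class SharedProducerExample {
    public static void main(String[] args) {
      Vertx vertx = Vertx.vertx();
      Map<String, String> config = new HashMap<>();
      config.put("bootstrap.servers", "localhost:9092");
      config.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
      config.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

      KafkaProducer<String, String> p1 = KafkaProducer.createShared(vertx, "the-producer", config);
      KafkaProducer<String, String> p2 = KafkaProducer.createShared(vertx, "the-producer", config);
      // p1 and p2 wrap the same underlying Kafka producer.

      p1.write(KafkaProducerRecord.create("my-topic", "key", "value"));
    }
  }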

[17:00:00] <yunyul> I'm a student who has used vertx for some time now for some side work and am interested in contributing to the project

[17:00:25] <yunyul> I've already submitted a PR (https://github.com/eclipse/vert.x/pull/1837), but I'm having some issues with signing the CLA form

[17:00:43] <yunyul> Most notably being that I actually can't seem to find it

[17:01:00] <yunyul> Probably just not looking hard enough, but I actually don't see a sign button on this page

[17:01:01] <yunyul> https://www.eclipse.org/legal/CLA.php

[17:04:33] <yunyul> Just kidding, there was a sign button on the sidebar


[17:07:36] <yunyul> Alright, so I've made a pull request (https://github.com/eclipse/vert.x/pull/1837) and signed off the ECA. What's the preferred way to get the checks to rerun?

[17:07:49] <yunyul> Should I modify my already pushed commit and force push on top

[17:08:03] <yunyul> Or close and reopen a pull request, etc.?

[18:19:56] <temporalfox> you should open a new one yunyul

[23:30:27] <VertxBeginner> Hello, can anyone show me the way to map objects (from the db and requests) to normal Java POJOs rather than entities with just String fields?
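One possible approach, sketched here as an assumption rather than an answer given in the channel: Vert.x's JsonObject can bind onto typed POJOs via mapTo, which delegates to Jackson databind (so jackson-databind must be on the classpath), avoiding String-only entity fields.

  // Bind a JsonObject (e.g. a request body or a DB row) onto a typed POJO and back.
  import io.vertx.core.json.JsonObject;

  public class MappingExample {

    // Plain POJO with typed fields.
    public static class User {
      public String name;
      public int age;
    }

    public static void main(String[] args) {
      JsonObject json = new JsonObject()
          .put("name", "alice")
          .put("age", 30);

      User user = json.mapTo(User.class);          // JSON -> typed POJO
      System.out.println(user.name + " is " + user.age);

      JsonObject back = JsonObject.mapFrom(user);  // POJO -> JSON
      System.out.println(back.encode());
    }
  }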