[Quickfixn] (no subject)

Antonio Meireles - GMail antonio.meireles at gmail.com
Fri Aug 5 11:33:45 PDT 2022


Forgot to mention: in my scenario both sides use NTP and the times we saw
are very close (sub-second differences).

Regards,

Antonio Meireles

On Fri, Aug 5, 2022, 15:28 Antonio Meireles - GMail <
antonio.meireles at gmail.com> wrote:

> The idea is not to keep rejecting it forever if, for any reason, the other
> party really needs to request it again.
>
> I am thinking of comparing the counterparty's SendingTime against my UtcNow.
>
> In my scenario, if I send 100 new messages in a burst and the counterparty
> detects a gap at the first message, it will also detect gaps for the next
> 99 messages and will generate all 100 resend requests almost at the same
> time.
>
> I also have a 100+ ms RTT between the hosts, so it is very common that they
> generate all the requests before I receive the first one.
>
> I know we may have clock differences between the hosts, and to address a
> more generic scenario we could add another configuration setting to hold the
> amount of time during which we keep rejecting similar gaps.
>
>
> Tks,
>
> Antonio Meireles (Guto)
>
>
> On Fri, Aug 5, 2022, 14:43 Christoph John <christoph.john at macd.com>
> wrote:
>
>> Hi
>>
>> Just curious: what do you want to store the time for? I could imagine
>> that in the "flooding" scenario the requests come in in short succession, so
>> after one range has been satisfied the other side should send a new range
>> anyway.
>> Moreover, which time do you want to store and compare? A time local to
>> your environment ("the time it finished being sent")? And on the next
>> resend request, which time do you compare it to? The sending time? That
>> might be off by even a few seconds compared to the time you are using, or
>> even earlier than the time you are using locally (unless both sides are
>> using NTP or the like, which in my experience isn't always the case).
>>
>> Just my 2 cents.
>> Cheers
>> Chris
>>
>> On Aug 5, 2022, 19:19:45, Antonio Meireles - GMail <antonio.meireles at gmail.com> wrote:
>>
>> Hi,
>>
>> Quickfix has the SEND_REDUNDANT_RESENDREQUESTS=N configuration, which can
>> be used to avoid sending ResendRequests for a gap that has already been
>> requested and would therefore be redundant.
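>>
>> In a session settings file this is set roughly as below (the key spelling
>> differs from the constant name; I am quoting it from memory, so please
>> double-check against the QuickFIX/n docs; CompIDs are placeholders):
>>
>>     [SESSION]
>>     BeginString=FIX.4.4
>>     SenderCompID=MYCOMP
>>     TargetCompID=THEIRCOMP
>>     # Do not issue another ResendRequest for a range that was already requested
>>     SendRedundantResendRequests=N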
>>
>> This can avoid ResendRequest loops, but sometimes the counterparty does not
>> use the same approach and may overload the session with several duplicated
>> and redundant ResendRequests.
>>
>> If we process duplicated ResendRequests we may send the whole gap several
>> times, and this only contributes to flooding the connection.
>>
>> So, I am proposing a configuration setting, DISCARD_DUPLICATED_RESEND_REQUESTS=Y
>> (default=N), that makes it possible to configure a session to discard these
>> duplicated ResendRequests.
>>
>> To do this, the session should store the boundaries of the last ResendRequest
>> and the time the corresponding resend finished being sent, and check new
>> ResendRequests against these stored values. If the newly requested gap is
>> inside the stored boundaries and the request was sent earlier than the stored
>> time, the session should discard the ResendRequest message.
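>>
>> A rough sketch of the bookkeeping I have in mind (illustrative only, not
>> actual QuickFIX/n session code; EndSeqNo = 0 means "up to infinity" as in
>> the FIX spec):
>>
>>     using System;
>>
>>     // Tracks the last satisfied ResendRequest and decides whether a new one
>>     // should be discarded as a duplicate.
>>     public sealed class ResendRequestGate
>>     {
>>         private int _lastBeginSeqNo;
>>         private int _lastEndSeqNo;               // 0 = "to infinity"
>>         private DateTime _lastResendFinishedUtc; // when we finished sending that gap
>>         private bool _hasLast;
>>
>>         // Call after the gap [beginSeqNo, endSeqNo] has been fully resent.
>>         public void RecordResendFinished(int beginSeqNo, int endSeqNo)
>>         {
>>             _lastBeginSeqNo = beginSeqNo;
>>             _lastEndSeqNo = endSeqNo;
>>             _lastResendFinishedUtc = DateTime.UtcNow;
>>             _hasLast = true;
>>         }
>>
>>         // True when the new request (BeginSeqNo/EndSeqNo, tags 7/16, plus
>>         // SendingTime, tag 52) asks for a range we already satisfied and was
>>         // sent before we finished satisfying it.
>>         public bool ShouldDiscard(int beginSeqNo, int endSeqNo, DateTime sendingTimeUtc)
>>         {
>>             if (!_hasLast)
>>                 return false;
>>
>>             bool insideStoredRange =
>>                 beginSeqNo >= _lastBeginSeqNo &&
>>                 (_lastEndSeqNo == 0 || (endSeqNo != 0 && endSeqNo <= _lastEndSeqNo));
>>
>>             bool sentBeforeWeFinished = sendingTimeUtc <= _lastResendFinishedUtc;
>>
>>             return insideStoredRange && sentBeforeWeFinished;
>>         }
>>     }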
>>
>> Does anyone have comments about this scenario or approach?
>>
>> Do you think it is useful to go ahead with implementing it and proposing a PR?
>>
>> BR,
>>
>> Antonio Meireles (Guto)
>>
>>
>> _______________________________________________
>> Quickfixn mailing list
>> Quickfixn at lists.quickfixn.com
>> http://lists.quickfixn.com/listinfo.cgi/quickfixn-quickfixn.com
>>
>