[netfilter-devel] Billing 3-1: WAS(Re: [PATCH 2/4] deferred drop, __parent workaround, reshape_fail)
firstname.lastname@example.org
Mon Aug 23 13:31:11 CEST 2004
On Mon, 2004-08-23 at 08:04, sandr8 wrote:
> jamal wrote:
> so, maybe we are saying the same thing but in different words :)
> if we blindly look at layer 3 and unbill when a packet is dropped,
> then the retransmission is already unbilled :)
> it will be billed when it takes place, but the first transmission that
> underwent a drop has been unbilled and hence we are square.
> this without looking at layer 4.
> what i was thinking about was mimicking the conntracking at
> a device level, having for each device a singleton object that
> has the same buckets as the connection tracking. it could
> store a lot of interesting information that would augment queuing
> disciplines to better share the pain of drops and also to perform
> per-connection head drops instead of connection-unaware ones.
This connection exists today in the form of marking. Let conntracking
mark, then use the fw classifier at the qdisc. If I am not mistaken, there's
something more powerful these days called connmark.
Now if you do it like that you don't need Harald's code and it will
be much easier to maintain (refer to my earlier email).
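To make the suggestion concrete, the marking path above can be wired up with stock iptables and tc. The interface name, mark value, rates and class ids below are illustrative placeholders, not anything from this thread:

```shell
# Restore any existing connection mark onto each packet early in mangle
iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark
# Example: mark new ssh connections and save the mark back to conntrack
iptables -t mangle -A PREROUTING -p tcp --dport 22 -j MARK --set-mark 1
iptables -t mangle -A PREROUTING -p tcp --dport 22 -j CONNMARK --save-mark

# At the qdisc, classify with the fw classifier keyed on the skb mark
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:1 htb rate 1mbit
tc filter add dev eth0 parent 1: protocol ip handle 1 fw flowid 1:1
```

With this in place, conntrack carries the per-connection mark and the qdisc never has to do connection tracking itself.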
> this would improve fairness and shorten the time tcp sources
> need to get the feedback, in a better way than random early
> drop does.
Conntracking is a horrible performance pig. I don't even turn it on. Don't
make it a requirement to turn it on.
> having this structure at a device level would be an answer
> for the issue of packets cloned to multiple interfaces, as we
> would be able to perform a separate accounting for
> each interface (which seems, afaik, reasonable... in most
> cases we would account on a single interface, and we also
> should likely get fewer hash collisions... no more than in the
> centralized conntrack).
Here's a thought:
Make it a tc action.
--> netfilter: conntrack --> connmark
--> qdisc: classify via fw -> billing action "bill"
--------> attach table index + lock to skb
--------> deal with any unfairness on enqueue by using the index
--------> and fine-grained lock found in the skb
On skb freeing, make sure you delete the index and lock.
On cloning, what to do?
Note I do plan to have a conntracking action at the qdisc level.
Let me know if that is useful to you, then I can prioritize.
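Purely as a sketch of where the hook would sit, a command-line shape for that pipeline might look like the following. The `bill` action and everything after the `action` keyword are hypothetical; only the conntrack/connmark and fw-classifier parts exist today:

```shell
# netfilter side: conntrack runs implicitly; copy the conn mark to the skb
iptables -t mangle -A POSTROUTING -j CONNMARK --restore-mark

# qdisc side: fw classifier picks the class; a hypothetical "bill" action
# would then attach the billing-table index and per-bucket lock to the skb
# before enqueue (syntax invented for illustration)
tc filter add dev eth0 parent 1: protocol ip handle 1 fw \
        flowid 1:1 action bill
```

The point is only that billing would hang off classification, after conntrack has already done the per-connection work.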
> furthermore, the per-bucket lock you suggested, that should
> be a good compromise, would also not "interfere" from one
> interface to the other one. well... maybe as soon as enqueues
> and dequeues on the same device stay serialized (thanks to
> dev->queue_lock) we should not need that further lock
> does it make sense?
I think it does. Make sure you double-check the rules I posted earlier.