Unmangling/caching of multiple links per RU #1840
sawenzel merged 6 commits into AliceO2Group:dev from AliceO2Group/AliceO2:dev
Conversation
@shahor02 Just another question: how many triggers do we actually cache? One, or all in a rawdata chunk?
```diff
   rdhNew->heartbeatOrbit != rdhOld->heartbeatOrbit ||
   rdhNew->heartbeatBC != rdhOld->heartbeatBC ||
-  rdhNew->triggerType != rdhOld->triggerType) {
+  !(rdhNew->triggerType & rdhOld->triggerType)) {
```
@shahor02 Sorry, I am afraid I put this question in the wrong place, so I repeat it here: what if the "new trigger" and "old trigger" differ by just one bit? Naively, this should mean that the two triggers are still different, and that was the case with the previous logic. But the new logic would consider such triggers identical. Is that indeed OK?
Hi @iouribelikov, the trigger word is actually composed of different trigger bits, e.g. the HB trigger flag may coincide with the physics trigger flag PhT. Right now I am not sure when they will decouple: in the extra pages (RDHs) of the same physical trigger, or only at the next one. So I did it in such a way that if they decouple after the 1st page, this is not interpreted as a new trigger. I assume that once we understand all the trigger states, a more complex check will be needed. Right now this check (especially in its old form) is not particularly useful, since a real change of the trigger would necessarily also change the orbit/BC.
Thank you, @shahor02, for the clarification! OK, let's keep it like that until the situation becomes clearer.
For the record, posting here the answer to Yura's question: what I cache in the reader are the raw data pages corresponding to some number of triggers for each link. This is a forced measure which I would prefer to avoid, since it involves extra copying. The problem is that a priori I don't know how many links are used for every RU, but I do know that (i) each link writes to its own superpage, (ii) there can be at most 256 pages in a superpage, (iii) a single page of a single link contains data of at most 1 trigger, and (iv) the raw data file is made by dumping sequentially the superpages of every link of the same trigger, then the same for the next trigger, etc. So, if I keep reading until I have seen at least N > 256 triggers for the link with the smallest number of pages, I am guaranteed to have seen all the links of all RUs for at least these N triggers. The number N of triggers to cache is set via the setMinTriggersToCache(n) method; by default I read 260 triggers, and if the requested N is < 257, it is automatically set to 257.
Every GBT link writes its data to a separate CRU superpage, so a superpage holds multiple triggers of the same link. When multiple links are dumped into the same file, the data of a given trigger is therefore not contiguous.
Here we cache the data of each link in a separate buffer, so that decoding can run over the same trigger across different links.
The signatures of the getRUDecodingStatSW and getRUDecodingStatHW methods have changed: instead of a reference, they now return a pointer to the relevant RUDecodingStat (which will be nullptr if the RU was not seen in the data).