We are currently trying to shorten the time between successive experiment cycles. To do so, we inserted time markers at key positions to locate possible hot spots. In our experiment at the APQ group at TU Darmstadt, we noticed that the transition to buffered takes around 150 ms.
We placed time markers inside the experiment queue to record when the devices are instructed to transition to buffered, and when the experiment queue receives each device's confirmation that it has transitioned.
For the devices, we placed timestamps at the beginning and end of the transition to buffered, both inside the DeviceTab (device_base_class.py) and inside the worker classes of each device. Ideally, we would expect most of the time to be spent in the actual device transition, with all devices transitioning in parallel, so that the slowest device determines the overall transition time.
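To illustrate the kind of instrumentation we used, here is a minimal sketch of a timing marker as a decorator. The names (`time_marker`, the wrapped `transition_to_buffered`) are illustrative only and do not reflect the actual labscript code; the real markers live at the positions listed in the commits below.

```python
import functools
import time


def time_marker(label):
    """Wrap a function and log timestamps at entry and exit.

    `label` identifies the marker in the log. This is an illustrative
    sketch, not the actual instrumentation in the labscript code.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            print(f"[{label}] start at t={start:.6f} s")
            try:
                return func(*args, **kwargs)
            finally:
                end = time.perf_counter()
                print(f"[{label}] end, took {(end - start) * 1e3:.1f} ms")
        return wrapper
    return decorator


@time_marker("transition_to_buffered")
def transition_to_buffered():
    time.sleep(0.01)  # stand-in for the real device work


transition_to_buffered()
```

Comparing the start/end pairs across the queue, the DeviceTab, and the workers is what lets us attribute the 150 ms to specific layers.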
The exact positions of the time markers can be found in the following commits:
Our test setup contains the following devices: NI-6713, NI-6534, NI-6259, PulseBlaster, and a software-emulated NI-6713.
In the experiment queue, all these devices are in one start group and transition simultaneously.
Please see the attached image for the timestamps that we found.
In thread 280, we can see that the device base class sends the transition-to-buffered signal to the workers with considerable delays between them. Ideally, we would expect this to happen within a few milliseconds. A possible cause could be the underlying Qt execution queue.
Furthermore, there may be communication overhead from the queues, as there is always a small delay (around 5 ms) in the communication between threads. A typical queue should only take around 100 µs per message.
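The expected queue cost can be checked with a small micro-benchmark: send timestamped messages through a `queue.Queue` between two threads and average the one-way latency. The measured value depends on the GIL and OS scheduling, but on a typical machine it comes out well below the ~5 ms delays we observed, supporting the idea that the extra delay comes from elsewhere (e.g. the main thread).

```python
import queue
import threading
import time


def measure_queue_latency(n=1000):
    """Send n timestamped messages through a queue.Queue between two
    threads and return the mean one-way latency in microseconds."""
    q = queue.Queue()
    latencies = []

    def consumer():
        for _ in range(n):
            sent = q.get()
            latencies.append(time.perf_counter() - sent)

    t = threading.Thread(target=consumer)
    t.start()
    for _ in range(n):
        q.put(time.perf_counter())
    t.join()
    return sum(latencies) / n * 1e6


print(f"mean queue latency: {measure_queue_latency():.1f} us")
```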
From these results, and from reading the code, it appears that many sub-functions are executed on the main thread, which hurts parallel performance. Examples of this can be found in the StateQueue and in the tab base class, where asynchronously started tasks are later executed on the main thread. One such task is transition to buffered, which is called on the device tabs.
Our idea would be to implement a device thread that manages a task queue and forwards each task to the UI thread or the worker thread as appropriate. The state would have to be kept at the master so that we can, for example, efficiently access the front panel values. This would allow us to avoid waiting on the main thread when starting tasks such as transition to buffered. However, this seems like a lot of work, considering that most devices would require restructuring.
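As a rough sketch of what we have in mind (hypothetical API, not existing labscript code): each device gets its own thread draining a task queue, so submitting a task like transition to buffered returns immediately and never blocks on the main thread.

```python
import queue
import threading


class DeviceThread:
    """Sketch of the proposed per-device thread (hypothetical names):
    tasks are queued and executed on a dedicated thread instead of
    being deferred to the main/UI thread."""

    def __init__(self):
        self._tasks = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def submit(self, func, *args):
        """Queue a task; returns immediately without blocking the caller."""
        self._tasks.put((func, args))

    def shutdown(self):
        """Process any remaining tasks, then stop the thread."""
        self._tasks.put(None)
        self._thread.join()

    def _run(self):
        while True:
            item = self._tasks.get()
            if item is None:
                break
            func, args = item
            func(*args)  # e.g. forward transition_to_buffered to the worker
```

In the real design, `submit` would dispatch UI-bound tasks to the Qt main thread and device-bound tasks straight to the worker, with shared state kept at the master for front panel access.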
Do you have suggestions on how we should proceed with this issue?