
Task.read interprets data incorrectly for short reads #528

Open

bkeryan opened this issue Mar 5, 2024 · 1 comment

bkeryan (Collaborator) commented Mar 5, 2024

When DAQmxReadAnalogF64 with DAQmx_Val_GroupByChannel returns fewer samples than requested, it squashes the valid samples together at the beginning of the buffer. The way that nidaqmx.Task.read handles short reads doesn't take this into account, so it may return samples that have been overwritten.

Test case (from tests/component/test_task_read_ai.py, under development):

def test___analog_multi_channel_finite___read_too_many_sample___returns_valid_2d_channels_samples_truncated(
    ai_multi_channel_task: nidaqmx.Task,
) -> None:
    samples_to_acquire = 5
    ai_multi_channel_task.timing.cfg_samp_clk_timing(rate=1000.0, sample_mode=AcquisitionType.FINITE, samps_per_chan=samples_to_acquire)
    num_channels = ai_multi_channel_task.number_of_channels
    samples_to_read = 10

    data = ai_multi_channel_task.read(samples_to_read)

    expected = [
        [_get_voltage_offset_for_chan(chan_index) for _ in range(samples_to_acquire)]
        for chan_index in range(num_channels)
    ]
    _assert_equal_2d(data, expected, abs=VOLTAGE_EPSILON)

Result (lib and grpc fail the same way, because the underlying interpreters have consistent behavior here):

_ test___analog_multi_channel_finite___read_too_many_sample___returns_valid_2d_channels_samples_truncated[library_init_kwargs] _

ai_multi_channel_task = Task(name=_unnamedTask<0>)

    def test___analog_multi_channel_finite___read_too_many_sample___returns_valid_2d_channels_samples_truncated(
        ai_multi_channel_task: nidaqmx.Task,
    ) -> None:
        samples_to_acquire = 5
        ai_multi_channel_task.timing.cfg_samp_clk_timing(rate=1000.0, sample_mode=AcquisitionType.FINITE, samps_per_chan=samples_to_acquire)
        num_channels = ai_multi_channel_task.number_of_channels
        samples_to_read = 10

        data = ai_multi_channel_task.read(samples_to_read)

        expected = [
            [_get_voltage_offset_for_chan(chan_index) for _ in range(samples_to_acquire)]
            for chan_index in range(num_channels)
        ]
>       _assert_equal_2d(data, expected, abs=VOLTAGE_EPSILON)

tests\component\test_task_read_ai.py:144:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

data = [[1.000091555528428, 1.000396740623188, 1.000396740623188, 0.999786370433668, 0.999786370433668], [3.000274666585284, ...4, 2.999969481490524], [3.000274666585284, 3.000579851680044, 3.000579851680044, 2.999969481490524, 2.999969481490524]]
expected = [[1.0, 1.0, 1.0, 1.0, 1.0], [2.0, 2.0, 2.0, 2.0, 2.0], [3.0, 3.0, 3.0, 3.0, 3.0]], abs = 0.001

    def _assert_equal_2d(data: List[List[float]], expected: List[List[float]], abs: float) -> None:
        # pytest.approx() does not support nested data structures.
        assert len(data) == len(expected)
        for i in range(len(data)):
>           assert data[i] == pytest.approx(expected[i], abs=abs)
E           assert [3.000274666585284, 3.000579851680044, 3.000579851680044, 2.999969481490524, 2.999969481490524] == approx([2.0 ± 1.0e-03, 2.0 ± 1.0e-03, 2.0 ± 1.0e-03, 2.0 ± 1.0e-03, 2.0 ± 1.0e-03])
E
E             comparison failed. Mismatched elements: 5 / 5:
E             Max absolute difference: 1.000579851680044
E             Max relative difference: 0.3334621643612693
E             Index | Obtained          | Expected
E             0     | 3.000274666585284 | 2.0 ± 1.0e-03
E             1     | 3.000579851680044 | 2.0 ± 1.0e-03
E             2     | 3.000579851680044 | 2.0 ± 1.0e-03
E             3     | 2.999969481490524 | 2.0 ± 1.0e-03
E             4     | 2.999969481490524 | 2.0 ± 1.0e-03

tests\component\test_task_read_ai.py:151: AssertionError

NI IO Trace shows:
[NI IO Trace screenshots]

The original array was
[1,1,1,1,1,0,0,0,0,0,2,2,2,2,2,0,0,0,0,0,3,3,3,3,3,0,0,0,0,0]
then it was squashed to
[1,1,1,1,1,2,2,2,2,2,3,3,3,3,3,x,x,x,x,x,x,x,x,x,x,x,x,x,x,x]
When this happened, the old 2s were overwritten with 3s.

This test case passes if you increase the number of samples to acquire because a larger read buffer prevents the old/new sample positions from overlapping.
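The squashing behavior above can be reproduced in plain NumPy. This is a minimal sketch, not nidaqmx code: `extract_short_read` is a hypothetical helper illustrating the fix (slice out the squashed region before reshaping, using the actual sample count rather than the requested one).

```python
import numpy as np


def extract_short_read(buffer, num_channels, samples_read):
    """Hypothetical helper: extract valid samples from a GroupByChannel
    buffer after a short read.

    After a short read, DAQmx has already squashed each channel's valid
    samples to the front of the buffer, so the valid data occupies the
    first num_channels * samples_read elements, grouped by channel.
    """
    valid = buffer[: num_channels * samples_read]
    return valid.reshape(num_channels, samples_read)


# Simulate the buffers from this issue: 3 channels, 10 samples requested,
# only 5 acquired. DAQmx first writes each channel's 5 samples at a
# stride of 10 (the requested samples-per-channel)...
requested, read, channels = 10, 5, 3
buffer = np.zeros(channels * requested)
for chan in range(channels):
    buffer[chan * requested : chan * requested + read] = chan + 1
# ...then squashes the valid samples together at the front on return.
squashed = np.concatenate(
    [buffer[c * requested : c * requested + read] for c in range(channels)]
)
buffer[: channels * read] = squashed

# Wrong: reshaping with the *requested* sample count reads overwritten
# data -- channel 1 comes back full of 3s, as in the failing test.
wrong = buffer.reshape(channels, requested)[:, :read]
# Right: slice out the squashed region, then reshape with samples_read.
right = extract_short_read(buffer, channels, read)
```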

bkeryan (Collaborator, Author) commented Mar 5, 2024

FYI, DAQmx_Val_GroupByScanNumber does not have this data squashing/shifting behavior.

With DAQmx_Val_GroupByScanNumber, the original array would be:
[1,2,3,1,2,3,1,2,3,1,2,3,1,2,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
Truncating the number of samples would not move any of the samples.

NumPy can use this data format without transposing the array indices if you specify Fortran-contiguous order instead of C-contiguous order, but nidaqmx-python doesn't currently support this.
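A small NumPy sketch of that point: with the interleaved (scan-number-grouped) layout, a Fortran-order reshape yields a channels-by-samples array directly, producing the same result as a C-order reshape followed by a transpose.

```python
import numpy as np

# GroupByScanNumber interleaves channels within each scan:
# [ch0_s0, ch1_s0, ch2_s0, ch0_s1, ch1_s1, ch2_s1, ...]
channels, samples = 3, 5
interleaved = np.array(
    [chan + 1 for _ in range(samples) for chan in range(channels)],
    dtype=np.float64,
)

# C (row-major) order needs a transpose to get channels as rows:
by_channel_c = interleaved.reshape(samples, channels).T
# Fortran (column-major) order reads channels out as rows directly:
by_channel_f = interleaved.reshape(channels, samples, order="F")

assert np.array_equal(by_channel_c, by_channel_f)

# A short read only truncates the valid region; no samples move, so the
# same reshape works with the actual sample count:
samples_read = 2
truncated = interleaved[: channels * samples_read].reshape(
    channels, samples_read, order="F"
)
```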
