
How to write tests with VM::init? #118

Open
ayanko opened this issue Jun 11, 2020 · 8 comments

ayanko commented Jun 11, 2020

Hi there!

I have a very trivial lib:

lib.rs

#[macro_use]
extern crate lazy_static;
use std::sync::Mutex;
...
#[cfg(test)]
lazy_static! {
    pub static ref TEST_MUTEX: Mutex<()> = Mutex::new(());
}
...

service.rs

Some module with dumb, trivial assertions:

#[test]
fn test_foo1() {
    let _guard = crate::TEST_MUTEX.lock().unwrap();
    VM::init();

    let array = Array::new();
    assert_eq!(array.length(), 0);
}

#[test]
fn test_foo2() {
    let _guard = crate::TEST_MUTEX.lock().unwrap();
    VM::init();

    let array = Array::new();
    assert_eq!(array.length(), 0);
}

#[test]
fn test_foo3() {
    let _guard = crate::TEST_MUTEX.lock().unwrap();
    VM::init();

    let array = Array::new();
    assert_eq!(array.length(), 0);
}

Run tests

cargo test
(signal: 11, SIGSEGV: invalid memory reference)

I tried different lock implementations, the serial_test crate, etc. Nothing works; the same crash happens from time to time.

The only workaround that REALLY works is to run all tests in SEQUENCE:

cargo test -- --test-threads 1

I wonder how the internal rutie tests still work with their LOCK_FOR_TEST lock.
https://github.com/danielpclark/rutie/blob/master/src/class/integer.rs#L207

Did I do something wrong?
Thx in advance.


danielpclark commented Jun 11, 2020

There can be only one VM::init() running at any one time. You can look at this file and see an RwLock used for it, which may be what you want: https://github.com/danielpclark/rutie/blob/master/src/lib.rs#L61 You can see VM::init used multiple times with that RwLock here: https://github.com/danielpclark/rutie/blob/master/src/class/integer.rs

If your doc comments run test code, then simply put a hash before VM::init() in the doctest so they don't conflict: # VM::init()


ayanko commented Jun 11, 2020

There can be only one VM::init()

Per thread?

I still don't understand why the test suite in the rutie project passes while my project fails.
Your source code has multiple locks + VM::init.


danielpclark commented Jun 11, 2020

There are some key differences between your code and the code here:

#[cfg(test)]
lazy_static! {
    pub static ref LOCK_FOR_TEST: RwLock<i32> = RwLock::new(0);
}

// ...
        let _guard = LOCK_FOR_TEST.write().unwrap();
        VM::init();

You're using lock instead of write and you're using a Mutex instead of a RwLock.


ayanko commented Jun 11, 2020

I tried your solution before, with the same result.
OK, I will double-check it once more...


ayanko commented Jun 12, 2020

OK. The issue is with using the VM protect methods together with unwrap().

Here are the steps to reproduce (STR).

Create brand new project

$ cargo init --lib poc
$ cd poc

Cargo.toml

[dependencies]
rutie = "0.7.0"
lazy_static = "1.4.0"

src/lib.rs

extern crate rutie;

#[macro_use]
extern crate lazy_static;

use rutie::{VM, Object, Fixnum, Array};

use std::sync::{RwLock};

#[cfg(test)]
lazy_static! {
    pub static ref LOCK_FOR_TEST: RwLock<i32> = RwLock::new(0);
}

#[test]
fn test_two_plus_two() {
    // We don't need the lock here.
    assert_eq!(2 + 2, 4);
}

#[test]
fn test_array_length() {
    let _guard = LOCK_FOR_TEST.write().unwrap();
    VM::init();

    let array = Array::new();
    assert_eq!(array.length(), 0);
}

#[test]
fn test_array_protect_send() {
    let _guard = LOCK_FOR_TEST.write().unwrap();
    VM::init();

    let mut array = Array::new();
    array.push(Fixnum::new(101).to_any_object());
    let first = array.protect_send("first", &[]).unwrap();
    assert_eq!(first, Fixnum::new(101).to_any_object());
}

Run in sequence

$ cargo test  -- --test-threads 1

It never fails.

Run in parallel

Run it several times and sometimes it fails. It looks like it depends on the order in which the threads obtain the lock first (test_array_protect_send vs. test_array_length).

$ cargo test
    Finished test [unoptimized + debuginfo] target(s) in 0.01s
     Running target/debug/deps/poc-a9e080e9256b5366

running 3 tests
test test_two_plus_two ... ok
test test_array_protect_send ... ok
test test_array_length ... ok

test result: ok. 3 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
$ cargo test
    Finished test [unoptimized + debuginfo] target(s) in 0.01s
     Running target/debug/deps/poc-a9e080e9256b5366

running 3 tests
test test_two_plus_two ... ok
test test_array_length ... ok
test test_array_protect_send ... FAILED

failures:

---- test_array_protect_send stdout ----
thread 'test_array_protect_send' panicked at 'called `Result::unwrap()` on an `Err` value: #<SystemStackError: stack level too deep>', src/lib.rs:36:17
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    test_array_protect_send

test result: FAILED. 2 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out

With RUST_BACKTRACE=1

running 3 tests
test test_two_plus_two ... ok
test test_array_length ... ok
test test_array_protect_send ... FAILED

failures:

---- test_array_protect_send stdout ----
thread 'test_array_protect_send' panicked at 'called `Result::unwrap()` on an `Err` value: #<SystemStackError: stack level too deep>', src/lib.rs:36:17
stack backtrace:
   0: backtrace::backtrace::libunwind::trace
             at /Users/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.46/src/backtrace/libunwind.rs:86
   1: backtrace::backtrace::trace_unsynchronized
             at /Users/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.46/src/backtrace/mod.rs:66
   2: std::sys_common::backtrace::_print_fmt
             at src/libstd/sys_common/backtrace.rs:78
   3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
             at src/libstd/sys_common/backtrace.rs:59
   4: core::fmt::write
             at src/libcore/fmt/mod.rs:1069
   5: std::io::Write::write_fmt
             at /rustc/49cae55760da0a43428eba73abcb659bb70cf2e4/src/libstd/io/mod.rs:1504
   6: std::io::impls::<impl std::io::Write for alloc::boxed::Box<W>>::write_fmt
             at src/libstd/io/impls.rs:156
   7: std::sys_common::backtrace::_print
             at src/libstd/sys_common/backtrace.rs:62
   8: std::sys_common::backtrace::print
             at src/libstd/sys_common/backtrace.rs:49
   9: std::panicking::default_hook::{{closure}}
             at src/libstd/panicking.rs:198
  10: std::panicking::default_hook
             at src/libstd/panicking.rs:215
  11: std::panicking::rust_panic_with_hook
             at src/libstd/panicking.rs:511
  12: rust_begin_unwind
             at src/libstd/panicking.rs:419
  13: core::panicking::panic_fmt
             at src/libcore/panicking.rs:111
  14: core::option::expect_none_failed
             at src/libcore/option.rs:1268
  15: core::result::Result<T,E>::unwrap
             at /rustc/49cae55760da0a43428eba73abcb659bb70cf2e4/src/libcore/result.rs:1005
  16: availability_engine::test_array_protect_send
             at src/lib.rs:36
  17: availability_engine::test_array_protect_send::{{closure}}
             at src/lib.rs:30
  18: core::ops::function::FnOnce::call_once
             at /rustc/49cae55760da0a43428eba73abcb659bb70cf2e4/src/libcore/ops/function.rs:232
  19: <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once
             at /rustc/49cae55760da0a43428eba73abcb659bb70cf2e4/src/liballoc/boxed.rs:1008
  20: <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
             at /rustc/49cae55760da0a43428eba73abcb659bb70cf2e4/src/libstd/panic.rs:318
  21: std::panicking::try::do_call
             at /rustc/49cae55760da0a43428eba73abcb659bb70cf2e4/src/libstd/panicking.rs:331
  22: std::panicking::try
             at /rustc/49cae55760da0a43428eba73abcb659bb70cf2e4/src/libstd/panicking.rs:274
  23: std::panic::catch_unwind
             at /rustc/49cae55760da0a43428eba73abcb659bb70cf2e4/src/libstd/panic.rs:394
  24: test::run_test_in_process
             at src/libtest/lib.rs:541
  25: test::run_test::run_test_inner::{{closure}}
             at src/libtest/lib.rs:450
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.


failures:
    test_array_protect_send

test result: FAILED. 2 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out

Why SystemStackError: stack level too deep?


ayanko commented Jun 12, 2020

BTW, what is the philosophy of using unwrap with the protect methods?

  1. Should I design the lib to never use unwrap? (not sure it's possible to design a non-panicking lib for Ruby)
  2. Should I surface the original error (foo.protect_send(...).unwrap())?
  3. Should I always remap it to VM::raise_ex? (foo.protect_send(...).map_err(|e| VM::raise_ex(e)).unwrap())
  4. If the lib uses Rust threads, should a panic in one thread fail (poison) another thread as well?


ayanko commented Jun 12, 2020

I did a few experiments writing a dumb program:

#[macro_use]
extern crate rutie;

#[macro_use]
extern crate lazy_static;

use rutie::{VM, Object, Fixnum, Array};

use std::sync::{RwLock};

lazy_static! {
    pub static ref LOCK_FOR_TEST: RwLock<i32> = RwLock::new(0);
}

fn test_array_length() {
    let _guard = LOCK_FOR_TEST.write().unwrap();
    eprintln!("test_array_length: START");

    let array = Array::new();
    let length = array.length();
    println!("length: {}", length);
}

fn test_array_size() {
    let _guard = LOCK_FOR_TEST.write().unwrap();
    eprintln!("test_array_size: START");

    let array = Array::new();
    let size = array.protect_send("size", &[]).unwrap().try_convert_to::<Fixnum>().unwrap().to_i32();
    println!("size: {}", size);
}

fn main() {
    main_work();
    //main_does_not_work_1();
    //main_does_not_work_2();
    //main_does_not_work_3();
}

fn main_work() {
    let child2 = std::thread::spawn(move || {
        VM::init();
        test_array_size();
    });
    let res2 = child2.join();
}

fn main_does_not_work_1() {
    VM::init();
    let child2 = std::thread::spawn(move || {
        test_array_size();
    });
    let res2 = child2.join();
}

fn main_does_not_work_2() {
    let child1 = std::thread::spawn(move || {
        VM::init();
        test_array_length();
    });
    let child2 = std::thread::spawn(move || {
        VM::init();
        test_array_size();
    });
    let res1 = child1.join();
    let res2 = child2.join();
}

fn main_does_not_work_3() {
    VM::init();
    let child1 = std::thread::spawn(move || {
        test_array_length();
    });
    let child2 = std::thread::spawn(move || {
        test_array_size();
    });
    let res1 = child1.join();
    let res2 = child2.join();
}

So it looks like we cannot work with the Ruby VM from different Rust threads at all.

@danielpclark Is this true?

danielpclark commented:

On these questions:

1: Should I design the lib to never use unwrap? (not sure it's possible to design a non-panicking lib for Ruby)

Finding a way to write code without using unwrap leads to better, safer code. Some times and places need unwrap, but there are often better ways to do a thing.

2: Should I surface the original error (foo.protect_send(...).unwrap())?

I'm not quite sure I get the question.

3: Should I always remap it to VM::raise_ex? (foo.protect_send(...).map_err(|e| VM::raise_ex(e)).unwrap())

Mapping is the most concise way to do it. If the main application is a Ruby app then the answer is yes.

4: If the lib uses Rust threads, should a panic in one thread fail (poison) another thread as well?

I don't know. What I do know is that Ruby has its own global state per Ruby thread, and within its own thread it throws exceptions that break the current Ruby thread's processing so that error handling can proceed. If multiple Rust threads are running off one instance of Ruby, then one should consider how to handle a failing Rust thread in such a way that the Ruby process responds accordingly.

So it looks like we cannot work with the Ruby VM from different Rust threads at all.

I would expect we could… but I might recommend one master Rust thread that handles all communication between Rust and Ruby; that one master thread can have many other threads acting like sub-processes.
