
Multiple runs of the wasm build cause a memory leak #959

@themighty1

Description


Problem

wasm-bindgen has a limitation: it does not run the destructors of thread-local objects. This causes the wasm build of tlsn to panic after a few runs.

Details

According to a comment on PR wasm-bindgen/wasm-bindgen#2769, wasm-bindgen is not able to call destructors of TLS objects:

"This doesn't run any destructors or dispose of TLS storage, which seems like a significantly more difficult problem"

The tracing crate, which we rely on, uses thread-locals heavily. Because the TLS destructors are never called, sharded-slab (a tracing dependency) panics with

Thread count overflowed the configured max count. Thread index = 128, max threads = 128.
in https://github.com/hawkw/sharded-slab/blob/e540cdb7daafd6a6e9d17408d3de017029a6637f/src/shard.rs#L297

Full stacktrace:

```
$__rustc[a5b89e8e8011757d]::__rust_start_panic @ harness_executor_bg.wasm:0x61c4a4
$__rustc[a5b89e8e8011757d]::rust_panic @ harness_executor_bg.wasm:0x604700
$std::panicking::rust_panic_with_hook::hd0da58a2a1a3ca63 @ harness_executor_bg.wasm:0x4ee5df
$std::panicking::begin_panic_handler::{{closure}}::h2e2703491d4087bb @ harness_executor_bg.wasm:0x584d5e
$std::sys::backtrace::__rust_end_short_backtrace::h7b5d1a1a58dbd50f @ harness_executor_bg.wasm:0x61bfc6
$__rustc[a5b89e8e8011757d]::rust_begin_unwind @ harness_executor_bg.wasm:0x601f50
$core::panicking::panic_fmt::h2899aa4559e5a097 @ harness_executor_bg.wasm:0x601fd9
$sharded_slab::pool::Pool::create::hf61528a7b341bc94 @ harness_executor_bg.wasm:0x445eb1
$sharded_slab::pool::Pool::create_with::hf3cef710a37063ea @ harness_executor_bg.wasm:0x5596c8
$::new_span::h3f64b167989ea65c @ harness_executor_bg.wasm:0x570a49
$ as tracing_core::subscriber::Subscriber>::new_span::h8b1f3376335aa13d @ harness_executor_bg.wasm:0x5e7dad
$tracing::span::Span::new::h67155ec344646d77 @ harness_executor_bg.wasm:0x336cd0
$mpz_garble::protocol::semihonest::garbler::generate::{{closure}}::h95317ca98c067ec0 @ harness_executor_bg.wasm:0x1528d8
$pollster::block_on::h796d83f90c3ebe52 @ harness_executor_bg.wasm:0x26c40e
$core::ops::function::FnOnce::call_once{{vtable.shim}}::h9348d0e2451a4b41 @ harness_executor_bg.wasm:0x532f37
$mpz_common::context::mt::worker::Worker::run::hcd01ad18ac006c22 @ harness_executor_bg.wasm:0x38f799
$core::ops::function::FnOnce::call_once{{vtable.shim}}::h47f320fcc926d51d @ harness_executor_bg.wasm:0x584090
$core::ops::function::FnOnce::call_once{{vtable.shim}}::h0e49fee3defa58ee @ harness_executor_bg.wasm:0x58c880
$web_spawn_start_worker @ harness_executor_bg.wasm:0x3ed9aa
web_spawn_start_worker @ harness_executor.js:276
```

On my machine this problem manifests after 8-12 successive runs of the wasm prover in the browser.

Possible solutions

  • Fix this problem upstream in wasm-bindgen.
  • Reload the tracing subscriber after each run.
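The second option could look roughly like the sketch below (untested; assumes the tracing and tracing-subscriber crates, and the `run_once` wrapper is hypothetical). The idea is to scope a fresh subscriber to each run so its sharded-slab thread registry is dropped with the guard instead of living in a process-global subscriber.

```rust
// Hedged sketch of "reload the subscriber per run", not a confirmed fix.
fn run_once() {
    let subscriber = tracing_subscriber::fmt().finish();
    // set_default installs the subscriber for the current thread only and
    // returns a DefaultGuard; dropping the guard uninstalls it. Worker
    // threads would need the dispatcher propagated to them explicitly,
    // since set_default does not affect other threads' defaults.
    let _guard = tracing::subscriber::set_default(subscriber);

    // ... run the wasm prover here; spans recorded on this thread go to
    // the fresh subscriber ...
} // _guard dropped: the subscriber and its per-thread slots can be freed
```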
