
Conversation

@cjbradfield
Contributor

This doesn't make Drop for Resource or the Client (or its dependencies) do I/O without blocking, but it does allow you to call Client::close() and have all of the teardown happen in that async context.

Furthermore, this puts a check in the Drop implementation for Resource so that it doesn't try to call close() if is_valid() is false. This prevents error spam if a user has already called async_close() on the resource, rendering it empty.
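The guard described above might look roughly like this minimal sketch. The names Resource, is_valid(), and close() come from the discussion; the Option<u32> handle and the error type are illustrative stand-ins, not the crate's real types:

```rust
// Hypothetical stand-in for the real resource; the Option models
// whether the underlying handle is still live.
struct Resource {
    handle: Option<u32>,
}

impl Resource {
    fn is_valid(&self) -> bool {
        self.handle.is_some()
    }

    // Stand-in for the blocking close the real type performs.
    fn close(&mut self) -> Result<(), &'static str> {
        self.handle.take().map(|_| ()).ok_or("already closed")
    }
}

impl Drop for Resource {
    fn drop(&mut self) {
        // Skip close() entirely if async_close() already emptied the
        // resource -- this is what prevents the error spam on drop.
        if self.is_valid() {
            let _ = self.close();
        }
    }
}

fn main() {
    let mut r = Resource { handle: Some(1) };
    assert!(r.is_valid());
    r.close().unwrap();
    assert!(!r.is_valid());
    // Dropping now is silent: the guard sees is_valid() == false.
}
```

The key point is that Drop stays infallible and quiet once the handle is gone, whichever path emptied it.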

@cjbradfield
Contributor Author

Let me know if you'd like me to go the extra mile to make Drop on the Resource not block as well. We would need to be able to swap out the handle with an empty one so that we can then move the handle into the async context.
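The swap-out idea might be sketched like this. Handle and the tokio::spawn comment are illustrative assumptions, not the crate's actual types:

```rust
use std::mem;

// Illustrative stand-in for the real connection handle; Default gives
// us the "empty" value to leave behind after the swap.
#[derive(Default)]
struct Handle(Option<u32>);

struct Resource {
    handle: Handle,
}

impl Drop for Resource {
    fn drop(&mut self) {
        // Swap the live handle for an empty one so we own it by value
        // and can move it into an async context without blocking here.
        let taken = mem::take(&mut self.handle);
        if let Some(id) = taken.0 {
            // In the real code this would be something like
            // tokio::spawn(async move { close(id).await }).
            println!("would close handle {id} asynchronously");
        }
    }
}

fn main() {
    let r = Resource { handle: Handle(Some(7)) };
    drop(r);
}
```

mem::take (or mem::swap) is what makes the move legal: Drop only gives you &mut self, so the handle has to be replaced, not taken out directly.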

std::mem::swap(&mut conn_map, &mut self.connections);
for (_, c) in conn_map {
    c._conn.close().await?;
    if let Some(handler) = c._session.handler().upgrade() {
        // ...
    }
}
Owner

Let's put this logic inside Connection instead.

Contributor Author

How about I just do the Drop work for TreeMessageHandler, Connection, and Session? Otherwise, I can move this to OpenedConnectionInfo::close if you'd like.

Contributor Author

cjbradfield commented Aug 7, 2025

Also, it looks like Connection::close doesn't actually do anything. It calls Worker::stop, which just verifies the worker is there but doesn't stop it, or am I missing something? I'm also not disconnecting the Tree. I'll fix that.

Owner

  1. I think we can complete an implementation of close() for Connection/Session/Tree, and then also implement a Drop like you did (partially?).
  2. Worker::close depends on the configuration.
    • In the single-threaded case, you are right. Would you like to fix that?
    • In the multi-threaded/async case, this DOES stop everything completely: ParallelWorker::stop calls the MultiWorkerBackend::stop impl, which performs the actual stopping procedure for the async/multi-threaded backend.

Contributor Author

cjbradfield commented Aug 8, 2025

I can do that. I probably won't get that done until this evening my time. I'll also add Drop for Connection so that it is closed.

edit: close for Tree may be more involved. It has a HandlerReference, which prevents mutable access to the underlying handle (disconnect requires a mutable handle).
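For reference, a common workaround when a shared reference blocks a &mut method is interior mutability. This is a rough sketch under the assumption that locking is acceptable; every name here is illustrative, not the crate's actual API:

```rust
use std::sync::{Arc, Mutex};

struct TreeHandle {
    connected: bool,
}

impl TreeHandle {
    // disconnect() needs &mut, which a shared reference can't provide.
    fn disconnect(&mut self) {
        self.connected = false;
    }
}

// Wrapping the handle in a Mutex restores mutable access through a
// shared reference -- the "lock of some sort" mentioned later in this
// thread, with the usual runtime cost.
struct HandlerReference(Arc<Mutex<TreeHandle>>);

impl HandlerReference {
    fn close(&self) {
        self.0.lock().unwrap().disconnect();
    }
}

fn main() {
    let h = HandlerReference(Arc::new(Mutex::new(TreeHandle { connected: true })));
    h.close();
    assert!(!h.0.lock().unwrap().connected);
}
```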

@afiffon
Owner

afiffon commented Aug 7, 2025

@cjbradfield Check out #108, and rebase over the new master once this change is merged!

@cjbradfield cjbradfield changed the title from "Make client and resource close() do I/O work and leave no drop errs." to "Make client close() do I/O work and leave no drop errs." on Aug 7, 2025
@cjbradfield
Contributor Author

cjbradfield commented Aug 7, 2025

I'll rebase off of main once #108 merges, provided you approve.

@cjbradfield cjbradfield requested a review from afiffon August 7, 2025 20:35
@cjbradfield cjbradfield force-pushed the resource-drop branch 2 times, most recently from 37f774e to 2c65b98 on August 7, 2025 20:55
@cjbradfield
Contributor Author

I was able to clean this up and add close() implementations to everything but the Tree. If we want to add it there, we would have to add a lock of some sort. Given it isn't accessible from the client, I thought I'd hold off unless you say that's what you'd like.

@afiffon
Owner

afiffon commented Aug 15, 2025

@cjbradfield Let's merge this PR -- please apply cargo fmt & cargo clippy on nightly channel, and make sure tests pass :)

We still have to have a dropping flag for tree. Otherwise, we call drop() forever.
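The commit message above describes a re-entrancy guard. A minimal sketch of that pattern (the field name, the atomic, and begin_drop are assumptions, not the crate's real code):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

#[derive(Default)]
struct Tree {
    dropping: AtomicBool,
}

impl Tree {
    // Returns true only the first time: swap() hands back the previous
    // flag value, so re-entrant calls (e.g. drop() re-triggered while
    // tearing down internal references) bail out instead of recursing.
    fn begin_drop(&self) -> bool {
        !self.dropping.swap(true, Ordering::SeqCst)
    }
}

impl Drop for Tree {
    fn drop(&mut self) {
        if !self.begin_drop() {
            return;
        }
        // ... disconnect work that may indirectly re-enter drop() ...
    }
}

fn main() {
    let t = Tree::default();
    assert!(t.begin_drop());   // first entry wins
    assert!(!t.begin_drop());  // re-entry is refused
    drop(t); // Drop sees the flag already set and does nothing further
}
```

The swap makes check-and-set a single step, so the guard also holds if drops race across threads.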
@cjbradfield
Contributor Author

cjbradfield commented Aug 15, 2025

> @cjbradfield Let's merge this PR -- please apply cargo fmt & cargo clippy on nightly channel, and make sure tests pass :)

I'm having trouble getting test_smb_iterating_long_directory to pass, even on the main branch. I get the following (on both branches):

[ERROR smb::connection::transport::tcp] Got IO error: early eof -- Connection Error, notify NotConnected!
[ERROR smb::connection::worker::parallel::async_backend] Connection closed.

thread 'test_smb_iterating_long_directory' panicked at smb/tests/long_dir.rs:118:22:
called `Result::unwrap()` on an `Err` value: NotConnected
stack backtrace:
   0: __rustc::rust_begin_unwind
             at /rustc/29483883eed69d5fb4db01964cdf2af4d86e9cb2/library/std/src/panicking.rs:697:5
   1: core::panicking::panic_fmt
             at /rustc/29483883eed69d5fb4db01964cdf2af4d86e9cb2/library/core/src/panicking.rs:75:14
   2: core::result::unwrap_failed
             at /rustc/29483883eed69d5fb4db01964cdf2af4d86e9cb2/library/core/src/result.rs:1761:5
   3: core::result::Result<T,E>::unwrap
             at /home/chrisb/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/result.rs:1167:23
   4: long_dir::test_smb_iterating_long_directory::{{closure}}::{{closure}}::{{closure}}::{{closure}}
             at ./tests/long_dir.rs:118:22
   5: <futures_util::stream::stream::fold::Fold<St,Fut,T,F> as core::future::future::Future>::poll
             at /home/chrisb/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/futures-util-0.3.31/src/stream/stream/fold.rs:72:47
   6: long_dir::test_smb_iterating_long_directory::{{closure}}::{{closure}}
             at ./tests/long_dir.rs:129:10
   7: <core::pin::Pin<P> as core::future::future::Future>::poll
             at /home/chrisb/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/future/future.rs:124:9
   8: tokio::runtime::park::CachedParkThread::block_on::{{closure}}
             at /home/chrisb/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.47.1/src/runtime/park.rs:285:71
   9: tokio::task::coop::with_budget
             at /home/chrisb/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.47.1/src/task/coop/mod.rs:167:5
  10: tokio::task::coop::budget
             at /home/chrisb/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.47.1/src/task/coop/mod.rs:133:5
  11: tokio::runtime::park::CachedParkThread::block_on
             at /home/chrisb/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.47.1/src/runtime/park.rs:285:31
  12: tokio::runtime::context::blocking::BlockingRegionGuard::block_on
             at /home/chrisb/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.47.1/src/runtime/context/blocking.rs:66:14
  13: tokio::runtime::scheduler::multi_thread::MultiThread::block_on::{{closure}}
             at /home/chrisb/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.47.1/src/runtime/scheduler/multi_thread/mod.rs:87:22
  14: tokio::runtime::context::runtime::enter_runtime
             at /home/chrisb/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.47.1/src/runtime/context/runtime.rs:65:16
  15: tokio::runtime::scheduler::multi_thread::MultiThread::block_on
             at /home/chrisb/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.47.1/src/runtime/scheduler/multi_thread/mod.rs:86:9
  16: tokio::runtime::runtime::Runtime::block_on_inner
             at /home/chrisb/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.47.1/src/runtime/runtime.rs:358:50
  17: tokio::runtime::runtime::Runtime::block_on
             at /home/chrisb/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.47.1/src/runtime/runtime.rs:330:18
  18: long_dir::test_smb_iterating_long_directory::{{closure}}
             at ./tests/long_dir.rs:31:88
  19: core::ops::function::FnOnce::call_once
             at /home/chrisb/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/function.rs:250:5
  20: serial_test::serial_code_lock::local_serial_core_with_return
             at /home/chrisb/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/serial_test-3.2.0/src/serial_code_lock.rs:30:5
  21: long_dir::test_smb_iterating_long_directory
             at ./tests/long_dir.rs:30:1
  22: long_dir::test_smb_iterating_long_directory::{{closure}}
             at ./tests/long_dir.rs:31:49
  23: core::ops::function::FnOnce::call_once
             at /home/chrisb/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/function.rs:250:5
  24: core::ops::function::FnOnce::call_once
             at /rustc/29483883eed69d5fb4db01964cdf2af4d86e9cb2/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
[ERROR smb::tree] Failed to disconnect from tree: Message processing failed. Failed to send message to worker!
[ERROR smb::session] Failed to logoff: Message processing failed. Failed to send message to worker!

This even happens on 0.8.1. I guess I'll ignore it for now?

@afiffon
Owner

afiffon commented Aug 16, 2025

Locally, this test can cause serious stress on the target server, to the point at which it seems to hang the connection.
You can ignore it, as long as the CI passes. It's on my backlog to make it less broken.

@afiffon afiffon merged commit f37fc10 into afiffon:main Aug 16, 2025
3 of 4 checks passed
@cjbradfield cjbradfield deleted the resource-drop branch August 16, 2025 17:44