Be more careful with locking db.db_mtx #17418
asomers wants to merge 17 commits into openzfs:master
Conversation
alek-p
left a comment
I've already reviewed this internally, and, as the PR description states, we've had a good experience running with this patch for the last couple of months.

As I see it, in most cases (I've spotted only one difference) when you are taking …
FWIW, as we're discussing here, I even think, after all the staring at the code, that the locking itself is actually fine; it seems to be a result of optimizations, exactly because things don't need to be over-locked if correctness is guaranteed via other logical dependencies. I think I have actually nailed where the problem is, but @asomers says he can't try it :)

That's because of this comment from @pcd1193182: "So the subtlety here is that the value of the db.db_data and db_buf fields are, I believe, still protected by the db_mtx plus the db_holds refcount. The contents of the buffers are protected by the db_rwlock." So many places need both …
I'm sorry, I mixed it up. This is definitely needed, and then there's a bug with dbuf resize. Two different things.

@asomers Are you still awaiting reviewers on this? I've been running with the changes from this PR without any issues for a while now. It would be nice to get all the "prevents corruption" PRs in before 2.4.0.

Does this apply to 2.2.8 also?
amotin
left a comment
OK. I went through all this, and I believe most of the locking is not needed -- see below. Only a few I've left uncommented.
Though I see your comments, @amotin, I still struggle to understand the right thing to do, generally, because the locking requirements aren't well documented, nor are they enforced by the compiler or at runtime. Here are the different descriptions I've seen. From dbuf.h: … And here's what @pcd1193182 said in #17118: … And later: … But I don't see any list of what the various states are, nor how to tell which state a dbuf is in. @amotin added the following in that same discussion thread: … And @amotin added some more detail in this PR: …

I can't confidently make any changes here without a complete and accurate description of the locking requirements. What I need are: …

@amotin, can you please help with that? At least with the first part?
@asomers Let me rephrase the key points: …

My humble opinion: I think it is a reasonable request to: …

It is good to have optimizations, but it is not healthy that knowledge of the locking scheme is limited to a small group of people, with poor documentation and no way to examine the code for correctness.
@asomers Despite my comments on many of the changes here, IIRC there were some that could be useful. Do you plan to clean this up, document it, etc., or will I have to take it over?
Yes. My approach is to create some assertion functions which check that either db_data is locked, or the dbuf is in a state where it doesn't need to be. The WIP is here, but it isn't ready for review yet; probably next week: https://github.com/asomers/zfs/tree/db_data_elide
@amotin I've eliminated the lock acquisitions as you requested. Please review. Note that while I've run the ZFS test suite with this round of changes, I don't know whether they suffice to solve the original corruption bug. The only way to know that is to run the code in production. But I'd like your review before I try that, because it takes quite a bit of time and effort to get sufficient production time. Not to mention the risk of corrupting customer data again.
@asomers if you can rebase this on the latest commits in the master branch, that should resolve most of the CI build failures. While you're at it, please go ahead and squash the commits.
There are suddenly a lot of "Wrong value for OS variable!" failures. I think that …

I am not sure what it means, but when did you last rebase?
Not since September. I've avoided doing that, since it can make the review confusing. But I'll do it now. |
Signed-off-by: Alan Somers <asomers@gmail.com>
Lock db_mtx in some places that access db->db_data. But in some places, add assertions that the dbuf is in a state where it will not be copied, rather than locking it.

Lock db_rwlock in some places that access db->db.db_data's contents. But in some places, add assertions that should guarantee the buffer is being accessed by one thread only, rather than locking it.

Closes openzfs#16626
Sponsored by: ConnectWise
Signed-off-by: Alan Somers <asomers@gmail.com>
1) It wasn't actually checking the rwlock for indirect blocks. 2) Per @amotin, "DMU_BONUS_BLKID and DMU_SPILL_BLKID can only exist at level 0", so it was redundantly checking the blkid.
The assertion was no longer true after removing the check for db_dirtycnt in the previous commit.
Either this function needs to acquire db_rwlock, or we need some guarantee that no other thread can modify db_data while db_dirtycnt == 0. From what @amotin said, it sounds like there is no guarantee.
The meta dnode may have bonus or spill blocks, but we don't need to lock db_data for those.
Delete unintended change
According to @amotin that was always the intention. But it wasn't documented, and in practice wasn't always done. Also, don't lock db_rwlock during dbuf_verify. Since db_dirtycnt == 0, we don't need to.
These weren't necessary originally, but after rebasing they are.
	if (dr->dr_dnode->dn_phys->dn_nlevels != 1) {
		parent_db = dr->dr_parent->dr_dbuf;
		assert_db_data_addr_locked(parent_db);
		rw_enter(&parent_db->db_rwlock, RW_READER);
A couple of chunks below, in dbuf_write_ready(), you take db_rwlock on the dnode buffer. Both cases, though, are reads in sync context, and I would not expect them to race.
module/zfs/dnode.c
Outdated
	mutex_enter(&db->db_mtx);
	if (db->db_level != 1 || db->db_blkid >= end_blkid) {
		mutex_exit(&db->db_mtx);
I am not sure why we may need locking here. level and blkid should be constants, I think.
db_state and db_dirtycnt certainly need to be protected by db_mtx. I could move the mutex_enter down until after the db_level check if you insist, though.
I haven't looked at the bigger picture here, but I'd say yeah, between later and never.
amotin
left a comment
A couple of small nits, but please review the earlier comments still not marked resolved.
	}

	assert_db_data_addr_locked(parent_db);
	rw_enter(&parent_db->db_rwlock, RW_WRITER);
I wonder if we could move the dn_maxblkid update below before the lock acquisition (or after the release?) so we don't have to think about the lock ordering. They seem unrelated.
	ASSERT(list_head(&db->db_dirty_records) == dr);
	list_remove_head(&db->db_dirty_records);
	ASSERT(list_is_empty(&db->db_dirty_records));
	ASSERT(MUTEX_HELD(&db->db_mtx));
We took this lock just six lines above and have done nothing to it since.
Lock db->db_mtx in some places that access db->db_data. But don't lock it in free_children, even though it does access db->db_data, because that leads to a recurse-on-non-recursive panic.
Lock db->db_rwlock in some places that access db->db.db_data's contents.
Closes #16626
Sponsored by: ConnectWise
Motivation and Context
Fixes occasional in-memory corruption which is usually manifested as a panic with a message like "blkptr XXX has invalid XXX" or "blkptr XXX has no valid DVAs". I suspect that some on-disk corruption bugs have been caused by this same root cause, too.
Description
Always lock dmu_buf_impl_t.db_mtx in places that access the value of dmu_buf_impl_t.db->db_data. And always lock dmu_buf_impl_t.db_rwlock in places that access the contents of dmu_buf_impl_t.db->db_data.

Note that free_children still violates these rules. It can't easily be fixed without causing other problems. A proper fix is left for the future.
How Has This Been Tested?
I cannot reproduce the bug on command, so I had to rely on statistics to validate the patch.