Personal/nitinsingla/few fixes #165
base: main
Conversation
… directory cache purge of the current directory inode
…e not changed. W/o this, in read-only scenarios, we can have attrs expire and then get_file_size() will return -1 and we will not do readahead.
…of whether it could or could not add the entry to the directory cache. This is because if a file/dir is created inside a directory, our cache technically becomes out of sync with the server. Note that we may fail to add to the directory cache because it may be full, or due to some other error.
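A minimal sketch of the behavior described above, with hypothetical names (dir_cache, on_create_success); the real code paths differ:

```cpp
// Hypothetical sketch: on a successful CREATE/MKDIR we invalidate the
// parent's directory cache no matter whether the new entry could be
// added to it, since the server-side directory changed either way and
// the add itself may fail (e.g. cache full).
#include <cstdint>
#include <map>
#include <string>

struct dir_cache {
    std::map<std::string, uint64_t> entries;  // name -> cookie (simplified)
    bool valid = true;
    size_t max_entries = 1024;

    bool add(const std::string &name, uint64_t cookie)
    {
        if (entries.size() >= max_entries)
            return false;  // cache full: add fails
        entries[name] = cookie;
        return true;
    }

    void invalidate() { valid = false; }  // forces a re-read from the server
};

void on_create_success(dir_cache &parent_cache, const std::string &name,
                       uint64_t cookie)
{
    (void) parent_cache.add(name, cookie);  // best effort, may fail
    parent_cache.invalidate();              // but always invalidate
}
```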
…) and send_readdir_or_readdirplus_response() to fix an inode leak. Now we hold an inode lookupcnt ref for all entries with a valid inode. This way, send_readdir_or_readdirplus_response() can call decref() for the inodes it doesn't pass to fuse. Previously we were decrementing dircachecnt, and there was a possibility of it becoming 0 while lookupcnt was already 0. Those inodes would be left sitting in inode_map with lookupcnt=dircachecnt=0.
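A minimal sketch of this refcounting scheme. The names nfs_inode, lookupcnt, decref() and send_readdir_or_readdirplus_response() come from the commit message; everything else (structure, signatures) is assumed for illustration:

```cpp
// Hypothetical sketch of the leak fix: every directory entry with a
// valid inode holds a lookupcnt ref, and the response path drops that
// ref for entries it does not pass to fuse. Real signatures differ.
#include <atomic>
#include <cstddef>
#include <vector>

struct nfs_inode {
    std::atomic<int> lookupcnt{0};
    void incref() { lookupcnt.fetch_add(1); }
    void decref() { lookupcnt.fetch_sub(1); /* real code frees / removes from inode_map at 0 */ }
};

struct directory_entry {
    nfs_inode *inode = nullptr;  // may be null for entries without an inode
};

void send_readdir_or_readdirplus_response(std::vector<directory_entry> &entries,
                                          size_t num_passed_to_fuse)
{
    // Entries [0, num_passed_to_fuse) are returned to fuse; fuse now
    // owns those refs and releases them via a later FORGET. For the
    // rest, drop the ref we took, so no inode is left stranded in
    // inode_map with lookupcnt == 0.
    for (size_t i = num_passed_to_fuse; i < entries.size(); i++) {
        if (entries[i].inode)
            entries[i].inode->decref();
    }
}
```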
We rename all locks to the form <context>_lock_N, where N is the order of the lock. A thread can only acquire a lock of higher order (greater N) than the highest order lock it currently holds, i.e., a thread holding a lock *_lock_N cannot acquire any lock from *_lock_0 to *_lock_N-1 (it can only acquire *_lock_N+1 and higher order locks). Later I plan to add assertions to catch violations of the above rule.
nfs_client::inode_map_lock -> nfs_client::inode_map_lock_0
nfs_inode::ilock -> nfs_inode::ilock_1
nfs_inode::readdircache_lock -> nfs_inode::readdircache_lock_2
nfs_client::jukebox_seeds_lock -> nfs_client::jukebox_seeds_lock_39
ra_state::ra_lock -> ra_state::ra_lock_40
rpc_task_helper::task_index_lock -> rpc_task_helper::task_index_lock_41
rpc_stats_az::stats_lock -> rpc_stats_az::stats_lock_42
bytes_chunk_cache::chunkmap_lock -> bytes_chunk_cache::chunkmap_lock_43
membuf::mb_lock -> membuf::mb_lock_44
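A minimal sketch of the lock-order assertion this commit plans to add later. The ordered_mutex wrapper and held_orders tracking are hypothetical, not the project's actual implementation:

```cpp
// Hypothetical sketch of the planned assertion: each thread tracks the
// orders of the locks it currently holds, and acquiring a lock whose
// order is not strictly greater than the highest held order asserts.
#include <cassert>
#include <mutex>
#include <vector>

thread_local std::vector<int> held_orders;

class ordered_mutex {
    std::mutex mtx;
    const int order;
public:
    explicit ordered_mutex(int n) : order(n) {}

    void lock()
    {
        // Rule: only a strictly higher order lock may be taken.
        assert(held_orders.empty() || order > held_orders.back());
        mtx.lock();
        held_orders.push_back(order);
    }

    void unlock()
    {
        // Simplification: assumes LIFO lock/unlock; real code would
        // erase the matching order wherever it sits in the vector.
        assert(!held_orders.empty() && held_orders.back() == order);
        held_orders.pop_back();
        mtx.unlock();
    }
};

// Usage mirroring the renamed locks:
//   ordered_mutex inode_map_lock_0{0};
//   ordered_mutex ilock_1{1};
//   // locking ilock_1 then inode_map_lock_0 on the same thread asserts.
```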
…correctly as we were clearing the nfs_inode.
Added a TODO to fix it. Also fixed a log for readdir.
…ail to add the entry to dir cache.
Applications don't seem to handle it well.
… place. Made directory_entry purely readonly so that we don't update it, as its users do not expect it to change. So, if we receive a LOOKUP response for a Type (2) entry which is present, we remove that entry and create a new one.
After that, hold inode lock if we have to update the inode.
Fix rastate.
…side sync_membufs() is seen to fail sometimes.
… before starting the first write in sync_membufs()
Co-authored-by: Ubuntu <shubham@shubham808VM-westeurope.2vwp434kux0ehoz4y3jbj5wcwa.ax.internal.cloudapp.net>
This can be called when the cache is being shut down on file close. This will delete dirty/uncommitted membufs too.
Co-authored-by: Nagendra Tomar <[email protected]>
…e flush to complete. We cannot hold flush_lock while waiting for any WRITE RPCs to complete as write_iov_callback()->on_flush_complete() also grabs the flush_lock. This can cause a deadlock.
…the flush to complete. We have a rule that anyone waiting for a write to complete must not hold the flush_lock, as write_iov_callback() may also hold the flush_lock.
Co-authored-by: Nagendra Tomar <[email protected]>
else we can have a deadlock where write_iov_callback() first releases the membuf lock, which makes wait_for_ongoing_flush() believe that all flushes are done and that it can take the flush_lock safely, but write_iov_callback() calls on_flush_complete() later, which grabs the flush_lock. This can cause a deadlock. The fix is to let the callback drain. Since wait_for_ongoing_flush() is a rarely used function, we make that check there, while write_iov_callback() can run w/o any additional check or complexity. Another way of solving this would be to release the membuf lock only after on_flush_complete() runs, but that would have added unnecessary restrictions on what we can do inside on_flush_complete(). Co-authored-by: Nagendra Tomar <[email protected]>
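A rough sketch of the race and the drain-based fix, under the assumption that a simple in-flight counter models the callback lifetime; all names besides write_iov_callback(), on_flush_complete(), wait_for_ongoing_flush() and flush_lock are hypothetical:

```cpp
// Hypothetical model of the race: write_iov_callback() releases the
// membuf lock *before* it runs on_flush_complete() (which needs
// flush_lock), so a waiter that keys only off membuf locks can grab
// flush_lock in that window and deadlock with the callback.
#include <atomic>
#include <mutex>
#include <thread>

std::mutex flush_lock;
std::atomic<int> callbacks_in_flight{0};

void issue_write()
{
    callbacks_in_flight.fetch_add(1);
    // ... send WRITE RPC; write_iov_callback() runs on completion ...
}

void write_iov_callback()
{
    // ... membuf lock released here, before on_flush_complete() ...
    {
        std::lock_guard<std::mutex> g(flush_lock);
        // on_flush_complete() body runs under flush_lock.
    }
    callbacks_in_flight.fetch_sub(1);
}

void wait_for_ongoing_flush()
{
    // The fix: besides waiting on membuf locks, let the callbacks
    // drain completely before taking flush_lock. The cost sits here,
    // in the rarely used function, keeping the callback unchanged.
    while (callbacks_in_flight.load() != 0)
        std::this_thread::yield();

    std::lock_guard<std::mutex> g(flush_lock);
    // ... safe: no pending on_flush_complete() can want flush_lock ...
}
```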
Some of the fcsm asserts were incorrectly disabled; enabled them. Also added an assert that we always write full bcs to the backend, else we have issues since dirty/flushing/committed flags are tracked at membuf granularity. Co-authored-by: Nagendra Tomar <[email protected]>
* Implement 2-phase truncate. This change gets rid of SCAN_ACTION_TRUNCATE and removes truncate support from bytes_chunk_cache::scan(). Now scan() cannot remove membufs which are in use or dirty. truncate() is now a separate function which specializes in truncating the cache and duly waits for locked bcs. It assumes that truncate will be called from contexts which can afford to wait and where correctness is more important.
* Enforce that the membuf is locked and inuse inside trim().
* Undo the requirement to have the membuf locked and inuse inside trim(). It's hard to guarantee that, especially from ~bc_iovec().
* Increase max_retry to 200 to properly handle cases where we induce sleep in the write callback.
* Do not flush truncated membufs.
---------
Co-authored-by: Nagendra Tomar <[email protected]>
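A minimal sketch of the scan()/truncate() split described in the list above, assuming hypothetical membuf fields and a simplified chunk list; the real bytes_chunk_cache is more involved:

```cpp
// Hypothetical sketch: scan() is now purely opportunistic and never
// touches membufs which are in use or dirty, while the separate
// truncate() waits for locked bcs because its callers value
// correctness over latency.
#include <list>
#include <memory>

struct membuf {
    bool inuse = false;
    bool dirty = false;
    void wait_until_unlocked() { /* block until the lock holder is done */ }
};

struct bytes_chunk_cache {
    std::list<std::shared_ptr<membuf>> chunks;

    // scan() can only drop membufs that nobody is using and that
    // carry no dirty data (SCAN_ACTION_TRUNCATE is gone).
    void scan()
    {
        for (auto it = chunks.begin(); it != chunks.end(); ) {
            if (!(*it)->inuse && !(*it)->dirty)
                it = chunks.erase(it);
            else
                ++it;  // left for truncate()/flush to deal with
        }
    }

    // truncate() duly waits for locked bcs before dropping them.
    // (Simplified: a real truncate would only drop membufs beyond
    // the new file size.)
    void truncate()
    {
        for (auto &mb : chunks)
            mb->wait_until_unlocked();
        chunks.clear();
    }
};
```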
Co-authored-by: Nagendra Tomar <[email protected]>
* file_cache.* initial
* Audit of Nitin's unstable write PR: audited file_cache.h and file_cache.cpp
---------
Co-authored-by: Nagendra Tomar <[email protected]>
fcsm.h fcsm.cpp nfs_inode.h
fcsm.h fcsm.cpp
nfs_inode.cpp