btrfs: open-code btrfs_copy_from_user()
The function btrfs_copy_from_user() handles the folio dirtying for
buffered writes. The original design allowed the function to handle
multiple folios, but since commit "btrfs: make buffered write to copy one
page a time" there is no need to support multiple folios.

So open-code btrfs_copy_from_user() into the
copy_folio_from_iter_atomic() and flush_dcache_folio() calls.

The short-copy check and revert are still kept as-is.

Signed-off-by: Qu Wenruo <[email protected]>
adam900710 committed Nov 7, 2024
1 parent 6927c3d commit 3fd6e78
64 changes: 17 additions & 47 deletions fs/btrfs/file.c
@@ -37,52 +37,6 @@
#include "file.h"
#include "super.h"

/*
* Helper to fault in page and copy. This should go away and be replaced with
* calls into generic code.
*/
static noinline int btrfs_copy_from_user(loff_t pos, size_t write_bytes,
					 struct folio *folio, struct iov_iter *i)
{
	size_t copied = 0;
	size_t total_copied = 0;
	int offset = offset_in_page(pos);

	while (write_bytes > 0) {
		size_t count = min_t(size_t, PAGE_SIZE - offset, write_bytes);
		/*
		 * Copy data from userspace to the current page
		 */
		copied = copy_folio_from_iter_atomic(folio, offset, count, i);

		/* Flush processor's dcache for this page */
		flush_dcache_folio(folio);

		/*
		 * if we get a partial write, we can end up with
		 * partially up to date page. These add
		 * a lot of complexity, so make sure they don't
		 * happen by forcing this copy to be retried.
		 *
		 * The rest of the btrfs_file_write code will fall
		 * back to page at a time copies after we return 0.
		 */
		if (unlikely(copied < count)) {
			if (!folio_test_uptodate(folio)) {
				iov_iter_revert(i, copied);
				copied = 0;
			}
			if (!copied)
				break;
		}

		write_bytes -= copied;
		total_copied += copied;
		offset += copied;
	}
	return total_copied;
}

/*
* Unlock folio after btrfs_file_write() is done with it.
*/
@@ -1268,7 +1222,23 @@ ssize_t btrfs_buffered_write(struct kiocb *iocb, struct iov_iter *i)
			break;
		}

		copied = btrfs_copy_from_user(pos, write_bytes, folio, i);
		copied = copy_folio_from_iter_atomic(folio,
				offset_in_folio(folio, pos), write_bytes, i);
		flush_dcache_folio(folio);

		/*
		 * If we get a partial write, we can end up with a partially
		 * uptodate folio. Although we can handle that when sector
		 * size < page size, a copy that is not sector aligned causes
		 * a lot of complexity, so make sure it doesn't happen by
		 * forcing a retry of this copy.
		 */
		if (unlikely(copied < write_bytes)) {
			if (!folio_test_uptodate(folio)) {
				iov_iter_revert(i, copied);
				copied = 0;
			}
		}

		num_sectors = BTRFS_BYTES_TO_BLKS(fs_info, reserve_bytes);
		dirty_sectors = round_up(copied + sector_offset,
