Commit 8971b08

Have PhysicalBuffer just speak pages

1 parent 7cf73e7 commit 8971b08

File tree: 5 files changed, +148 −137 lines changed


README.md

Lines changed: 5 additions & 2 deletions
@@ -93,6 +93,7 @@ make test
 
 - VirtIO improvements:
   - Abstract and improve request/response queue from rng
+    - Ensure we zero out descriptors after using them so we don't accidentally reuse the buffer
   - Allocate buffers on the fly instead of using static buffers, or have client provide buffer
     - Reuse them if we are reusing a descriptor, or free them.
     - If the VirtIO device always owns the memory it uses, it is easier to reason about alloc/free, ensure buffers are the right size, and ensure provided buffers are physically contiguous
@@ -104,8 +105,10 @@ make test
   - Ensure we don't accidentally reuse descriptors while we are waiting for a response from the device. Don't automatically just wrap around! This is what might require a mutex rather than just atomic integers?
   - I think there is a race condition with the interrupts with the current non-locking mechanism. Ensure that if there are concurrent writes while an interrupt, then an interrupt won't miss a read (e.g. there will at least be a followup interrupt)
   - Remember features we negotiate, and ensure we are accounting for the different features in the logic (especially around notifications)
-  - Ensure `PhysicalBuffer` allocation and deallocation is safe, and that page and size math is correct (particularly in `drop()`). Make it foolproof.
-  - Perhaps introduce page <-> address translation layer in a single spot.
+- Physical memory allocation
+  - Ensure `PhysicalBuffer` allocation and deallocation is safe, and that page and size math is correct (particularly in `drop()`). Make it foolproof.
+  - Be explicit about it allocating pages. Don't pretend it is just addresses. Perhaps put page <-> address translation layer in a single spot.
+  - In fact, maybe `LockedPhysicalMemoryAllocator` should _not_ implement `Allocator`. If we want a malloc-like behavior on top of contiguous physical memory, we should explicitly use e.g. `LockedHeap` on top of some allocated physical pages (basically an arena).
   - Does `PhysicalBuffer::allocate` need an alignment argument? Does anything ever need more than page alignment?
 - Create `sync` module that has synchronization primitives
   - Wrapper around `spin::Mutex` that also disables interrupts, similar to Linux's `spin_lock_irqsave` (`x86_64::interrupts::without_interrupts` is handy here). Might need our own custom `MutexGuard` wrapper that handles re-enabling interrupts on `drop()`
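The `spin_lock_irqsave`-style wrapper in the last bullet can be prototyped as an RAII guard. The following is a hypothetical sketch only: it simulates the CPU interrupt-enable flag with an `AtomicBool` so it runs as plain Rust (a real kernel would toggle interrupts via `cli`/`sti`, e.g. through the `x86_64` crate), and `IrqSaveGuard` is an invented name, not this repo's API.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Mutex;

// Simulated CPU interrupt-enable flag; a real kernel would use cli/sti.
static INTERRUPTS_ENABLED: AtomicBool = AtomicBool::new(true);

/// RAII guard: disables "interrupts" while alive, and restores the
/// previous state on drop, like Linux's spin_lock_irqsave/irqrestore.
struct IrqSaveGuard {
    were_enabled: bool,
}

impl IrqSaveGuard {
    fn new() -> Self {
        // Atomically read the old state and disable interrupts.
        let were_enabled = INTERRUPTS_ENABLED.swap(false, Ordering::SeqCst);
        Self { were_enabled }
    }
}

impl Drop for IrqSaveGuard {
    fn drop(&mut self) {
        // Only re-enable if interrupts were enabled on entry, so nested
        // critical sections don't prematurely re-enable them.
        if self.were_enabled {
            INTERRUPTS_ENABLED.store(true, Ordering::SeqCst);
        }
    }
}

fn main() {
    let lock = Mutex::new(0u32);
    {
        let _irq = IrqSaveGuard::new(); // interrupts off for the critical section
        let mut guard = lock.lock().unwrap();
        *guard += 1;
        assert!(!INTERRUPTS_ENABLED.load(Ordering::SeqCst));
    } // both guards drop here: lock released, interrupt state restored
    assert!(INTERRUPTS_ENABLED.load(Ordering::SeqCst));
}
```

Acquiring the interrupt guard before the spinlock (and dropping in reverse order) is what prevents an interrupt handler from deadlocking on a lock its own CPU already holds.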

kernel/src/memory.rs

Lines changed: 49 additions & 48 deletions
@@ -1,4 +1,4 @@
-use core::alloc::{AllocError, Allocator, Layout, LayoutError};
+use core::alloc::{AllocError, Allocator, Layout};
 use core::ptr;
 use core::ptr::NonNull;
 
@@ -252,73 +252,74 @@ unsafe impl<S: PageSize> FrameAllocator<S> for LockedPhysicalMemoryAllocator<'_>
     }
 }
 
-/// Error type used in `allocate_zeroed_buffer`.
-#[derive(Debug, Clone)]
-pub(crate) enum AllocZeroedBufferError {
-    LayoutError(LayoutError),
-    AllocError(AllocError),
-}
-
 /// Physically contiguous buffer of memory. Allocates by page, so it can
 /// allocate more memory than requested. Useful for e.g. Direct Memory Access
 /// (DMA) like with VirtIO buffers.
+///
+/// NOTE: This type implements `Drop` and will free the allocated memory when
+/// it goes out of scope.
 #[derive(Debug)]
 pub(crate) struct PhysicalBuffer {
-    ptr: NonNull<[u8]>,
+    start_page: usize,
+    num_pages: usize,
 }
 
 impl PhysicalBuffer {
-    pub(crate) fn allocate(
-        len_bytes: usize,
-        alignment: usize,
-    ) -> Result<Self, AllocZeroedBufferError> {
-        let layout = Layout::from_size_align(len_bytes, alignment)
-            .map_err(AllocZeroedBufferError::LayoutError)?;
-        let ptr = KERNEL_PHYSICAL_ALLOCATOR
-            .allocate_zeroed(layout)
-            .map_err(AllocZeroedBufferError::AllocError)?;
-        Ok(Self { ptr })
+    // Don't need to expose this b/c allocate_zeroed is safer.
+    fn allocate(min_bytes: usize) -> Result<Self, AllocError> {
+        let num_pages = min_bytes.div_ceil(PhysicalMemoryAllocator::PAGE_SIZE);
+        let start_page = KERNEL_PHYSICAL_ALLOCATOR.with_lock(|allocator| {
+            allocator
+                .allocator
+                .allocate_contiguous(num_pages)
+                .ok_or(AllocError)
+        })?;
+        Ok(Self {
+            start_page,
+            num_pages,
+        })
     }
 
-    pub(crate) fn allocate_value<T>(val: T) -> Result<Self, AllocZeroedBufferError> {
-        let layout = Layout::new::<T>();
-        let ptr = KERNEL_PHYSICAL_ALLOCATOR
-            .allocate_zeroed(layout)
-            .map_err(AllocZeroedBufferError::AllocError)?;
+    pub(crate) fn allocate_zeroed(min_bytes: usize) -> Result<Self, AllocError> {
+        let buffer = Self::allocate(min_bytes)?;
+        let ptr = buffer.address().as_u64() as *mut u8;
         unsafe {
-            ptr::write_volatile(ptr.as_ptr().cast::<T>(), val);
+            ptr::write_bytes(ptr, 0, buffer.len_bytes());
        }
-        Ok(Self { ptr })
+        Ok(buffer)
     }
 
-    /// Consumes the buffer and returns the underlying physical address and
-    /// length in bytes. NOTE: It is up to the caller to free this memory,
-    /// ideally by constructing a new buffer with
-    /// `PhysicalBuffer::from_raw_parts` and letting that `Drop`.
-    pub(crate) fn into_raw_parts(self) -> (PhysAddr, usize) {
-        let buf = core::mem::ManuallyDrop::new(self);
-        let addr = PhysAddr::new(buf.ptr.addr().get() as u64);
-        let len_bytes = buf.ptr.len();
-        (addr, len_bytes)
+    pub(crate) fn address(&self) -> PhysAddr {
+        PhysAddr::new(self.start_page as u64 * PhysicalMemoryAllocator::PAGE_SIZE as u64)
     }
 
-    pub(crate) unsafe fn from_raw_parts(addr: PhysAddr, len_bytes: usize) -> Self {
-        let ptr = unsafe { nonnull_ptr_slice_from_addr_len(addr.as_u64() as usize, len_bytes) };
-        Self { ptr }
+    pub(crate) fn len_bytes(&self) -> usize {
+        self.num_pages * PhysicalMemoryAllocator::PAGE_SIZE
+    }
+
+    pub(crate) unsafe fn write_offset<T>(&mut self, offset: usize, val: T) {
+        let buffer_len = self.len_bytes();
+        assert!(
+            offset < self.len_bytes(),
+            "tried to write value at offset {offset} but buffer only has {buffer_len} bytes"
+        );
+        let ptr = (self.address().as_u64() + offset as u64) as *mut T;
+        ptr::write_volatile(ptr, val);
+    }
+
+    // TODO: Don't allow leaking. We are only doing this temporarily.
+    pub(crate) fn leak(self) -> PhysAddr {
+        let buf = core::mem::ManuallyDrop::new(self);
+        buf.address()
     }
 }
 
 impl Drop for PhysicalBuffer {
     fn drop(&mut self) {
-        // TODO: Is this correct? DRY with the `deallocate`, and perhaps add
-        // some types to ensure that we are converting to pages correctly. Also
-        // ensure that we do indeed "own" the entire page we are de-allocating.
-        let layout = unsafe {
-            Layout::from_size_align_unchecked(self.ptr.len(), PhysicalMemoryAllocator::PAGE_SIZE)
-        };
-        let u8_ptr = self.ptr.cast::<u8>();
-        unsafe {
-            KERNEL_PHYSICAL_ALLOCATOR.deallocate(u8_ptr, layout);
-        };
+        KERNEL_PHYSICAL_ALLOCATOR.with_lock(|allocator| {
+            allocator
+                .allocator
+                .free_contiguous(self.start_page, self.num_pages);
+        });
     }
 }
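The page/byte math that `PhysicalBuffer` now relies on (round requests up to whole pages with `div_ceil`, derive addresses from page numbers in one spot) can be checked in isolation. A minimal sketch assuming a 4 KiB page size; `PAGE_SIZE` here is a stand-in for the repo's `PhysicalMemoryAllocator::PAGE_SIZE`:

```rust
// Stand-in for PhysicalMemoryAllocator::PAGE_SIZE (assumed 4 KiB).
const PAGE_SIZE: usize = 4096;

// Mirrors PhysicalBuffer::allocate: even a 1-byte request consumes a
// full page, and exact multiples don't over-allocate.
fn pages_needed(min_bytes: usize) -> usize {
    min_bytes.div_ceil(PAGE_SIZE)
}

// Mirrors PhysicalBuffer::address: the page <-> address translation
// lives in exactly one place.
fn page_to_address(page: usize) -> u64 {
    page as u64 * PAGE_SIZE as u64
}

fn main() {
    assert_eq!(pages_needed(1), 1);
    assert_eq!(pages_needed(4096), 1); // exact multiple: no extra page
    assert_eq!(pages_needed(4097), 2);
    // A buffer starting at page 3 spans [0x3000, 0x3000 + num_pages * PAGE_SIZE).
    assert_eq!(page_to_address(3), 0x3000);
}
```

Keeping `Drop` in terms of `(start_page, num_pages)` means deallocation reuses exactly the numbers allocation produced, which is what makes the `drop()` math "foolproof" relative to the old pointer-length reconstruction.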

kernel/src/virtio/block.rs

Lines changed: 37 additions & 20 deletions
@@ -58,13 +58,15 @@ pub(crate) fn virtio_block_get_id(device_index: usize) {
         .get_virtqueue(VirtQueueIndex(0))
         .unwrap();
 
-    let (data_addr, _) = PhysicalBuffer::allocate(BlockRequest::GET_ID_DATA_LEN as usize, 8)
+    let data_addr = PhysicalBuffer::allocate_zeroed(BlockRequest::GET_ID_DATA_LEN as usize)
         .expect("failed to allocate GetID buffer")
-        .into_raw_parts();
+        // TODO: Don't leak here
+        .leak();
 
     let request = BlockRequest::GetID { data_addr };
     let raw_request = request.to_raw();
-    virtq.add_buffer(&raw_request.to_descriptor_chain());
+    let desc_chain = raw_request.to_descriptor_chain();
+    virtq.add_buffer(&desc_chain);
     virtq.notify_device();
 }
 
@@ -375,17 +377,26 @@ impl RawBlockRequest {
         // TODO: Ensure all of these descriptor components are dropped!
 
         // Allocate header
-        let header_buffer = PhysicalBuffer::allocate_value(RawBlockRequestHeader {
-            request_type: self.request_type as u32,
-            reserved: 0,
-            sector: self.sector,
-        })
-        .expect("failed to allocate RawBlockRequest header");
-        let header_desc = ChainedVirtQueueDescriptorElem::from_buffer(
-            header_buffer,
-            core::mem::size_of::<RawBlockRequestHeader>() as u32,
-            VirtQueueDescriptorFlags::new().with_device_write(false),
-        );
+        let header_size = core::mem::size_of::<RawBlockRequestHeader>() as u32;
+        let mut header_buffer = PhysicalBuffer::allocate_zeroed(header_size as usize)
+            .expect("failed to allocate RawBlockRequest header");
+        unsafe {
+            header_buffer.write_offset(
+                0,
+                RawBlockRequestHeader {
+                    request_type: self.request_type as u32,
+                    reserved: 0,
+                    sector: self.sector,
+                },
+            );
+        };
+
+        let header_desc = ChainedVirtQueueDescriptorElem {
+            // TODO: Don't leak
+            addr: header_buffer.leak(),
+            len: header_size,
+            flags: VirtQueueDescriptorFlags::new().with_device_write(false),
+        };
 
         // Buffer descriptor
         //
@@ -401,13 +412,19 @@ impl RawBlockRequest {
         // TODO: This is a waste of an entire page! We should allocate this
         // along with the header. We should probably also just use a slab
         // allocator or a sort of physically contiguous heap for this.
-        let status_buffer = PhysicalBuffer::allocate_value(self.status as u8)
+        let status_size = core::mem::size_of::<BlockRequestStatus>() as u32;
+        let mut status_buffer = PhysicalBuffer::allocate_zeroed(status_size as usize)
             .expect("failed to allocate RawBlockRequest status");
-        let status_desc = ChainedVirtQueueDescriptorElem::from_buffer(
-            status_buffer,
-            core::mem::size_of::<u8>() as u32,
-            VirtQueueDescriptorFlags::new().with_device_write(true),
-        );
+        unsafe {
+            status_buffer.write_offset(0, self.status);
+        };
+
+        let status_desc = ChainedVirtQueueDescriptorElem {
+            // TODO: Don't leak
+            addr: status_buffer.leak(),
+            len: status_size,
+            flags: VirtQueueDescriptorFlags::new().with_device_write(true),
+        };
 
         [header_desc, buffer_desc, status_desc]
     }
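The three-descriptor shape being built above follows the standard virtio-blk request layout: a device-readable header, the data buffer, then a one-byte status the device writes back. A simplified stand-alone sketch; `Desc` and `block_read_chain` are mock stand-ins for illustration, not the repo's `ChainedVirtQueueDescriptorElem`:

```rust
// Simplified descriptor: guest-physical address, length, and whether
// the device (rather than the driver) writes this buffer.
#[derive(Debug, Clone, Copy)]
struct Desc {
    addr: u64,
    len: u32,
    device_write: bool,
}

// Build the chain for a virtio-blk *read*: the device reads the 16-byte
// header (u32 type + u32 reserved + u64 sector), fills the data buffer,
// and writes the 1-byte status.
fn block_read_chain(header_addr: u64, data_addr: u64, data_len: u32, status_addr: u64) -> [Desc; 3] {
    [
        Desc { addr: header_addr, len: 16, device_write: false },
        Desc { addr: data_addr, len: data_len, device_write: true },
        Desc { addr: status_addr, len: 1, device_write: true },
    ]
}

fn main() {
    let chain = block_read_chain(0x1000, 0x2000, 512, 0x3000);
    // Only the header is driver-owned; data and status are device-written.
    assert!(!chain[0].device_write);
    assert!(chain[1].device_write && chain[2].device_write);
    assert_eq!(chain[2].len, 1);
}
```

Because each descriptor's buffer is `leak()`ed into the chain, ownership effectively transfers to the device until the response arrives, which is why the TODOs above call for reclaiming (not leaking) those buffers once the device marks the chain used.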
