Commit 44638e6 (parent 8971b08)

Remove impl Allocator for memory mapper; not confident it is correct

3 files changed: +59 −68 lines

README.md (2 additions, 11 deletions)

@@ -94,22 +94,13 @@ make test
 - VirtIO improvements:
   - Abstract and improve request/response queue from rng
   - Ensure we zero out descriptors after using them so we don't accidentally reuse the buffer
-  - Allocate buffers on the fly instead of using static buffers, or have client provide buffer
-    - Reuse them if we are reusing a descriptor, or free them.
-    - If the VirtIO device always owns the memory it uses, it is easier to reason about alloc/free, ensure buffers are the right size, and ensure provided buffers are physically contiguous
-  - Allocation and pointers: avoid manually calling `memory` alloc functions and passing around pointers
-    - Have virtqueues "own" their buffers and handle alloc/dealloc. Devices that need to alloc for descriptors, like virtio-blk, can do the same.
-    - We can use `Box<T, KERNEL_PHYSICAL_ALLOC>` to ensure we are using physical memory.
+  - Improve memory "ownership" ergonomics, ensuring buffers are dropped after use. Maybe abstract into a common layer somehow?
+  - Create a physically contiguous heap, or slab allocator, or something for virtio buffer requests so we don't waste an entire page per tiny allocation.
   - Locking: we need to lock writes (I think?), but we should be able to read from the queue without locking. This should be ergonomic. I don't necessarily want to bury a mutex deep in the code.
     - Investigate how Linux or other OS virtio drivers do locking
     - Ensure we don't accidentally reuse descriptors while we are waiting for a response from the device. Don't automatically just wrap around! This is what might require a mutex rather than just atomic integers?
     - I think there is a race condition with the interrupts with the current non-locking mechanism. Ensure that if there are concurrent writes while an interrupt, then an interrupt won't miss a read (e.g. there will at least be a followup interrupt)
   - Remember features we negotiate, and ensure we are accounting for the different features in the logic (especially around notifications)
-- Physical memory allocation
-  - Ensure `PhysicalBuffer` allocation and deallocation is safe, and that page and size math is correct (particularly in `drop()`). Make it foolproof.
-  - Be explicit about it allocating pages. Don't pretend it is just addresses. Perhaps put page <-> address translation layer in a single spot.
-  - In fact, maybe `LockedPhysicalMemoryAllocator` should _not_ implement `Allocator`. If we want a malloc-like behavior on top of contiguous physical memory, we should explicitly use e.g. `LockedHeap` on top of some allocated physical pages (basically an arena).
-  - Does `PhysicalBuffer::allocate` need an alignment argument? Does anything ever need more than page alignment?
 - Create `sync` module that has synchronization primitives
   - Wrapper around `spin::Mutex` that also disables interrupts, similar to Linux's `spin_lock_irqsave` (`x86_64::interrupts::without_interrupts` is handy here). Might need our own custom `MutexGuard` wrapper that handles re-enabling interrupts on `drop()`
   - In the future we should disable preemption when spin locks are taken
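The new TODO item asks for a contiguous heap or slab allocator so tiny virtio buffers stop costing a full page each. A minimal host-runnable sketch of the bump-arena variant of that idea (all names are hypothetical, and a `Vec<u8>` stands in for a physically contiguous region; this is not code from the repo):

```rust
// A bump arena carved out of one contiguous region, so many small
// allocations share a single "page" instead of each taking a whole page
// from the physical allocator. Freeing individual allocations is not
// supported; the whole arena is reclaimed at once.
struct BumpArena {
    buf: Vec<u8>, // stand-in for a physically contiguous region
    next: usize,  // offset of the next free byte
}

impl BumpArena {
    fn new(size: usize) -> Self {
        BumpArena { buf: vec![0u8; size], next: 0 }
    }

    /// Reserve `size` bytes aligned to `align`; returns the offset into the
    /// region, or None if the arena is exhausted.
    fn alloc(&mut self, size: usize, align: usize) -> Option<usize> {
        let start = self.next.next_multiple_of(align);
        let end = start.checked_add(size)?;
        if end > self.buf.len() {
            return None;
        }
        self.next = end;
        Some(start)
    }
}

fn main() {
    let mut arena = BumpArena::new(4096); // one "page" backs many allocations
    let a = arena.alloc(16, 8).unwrap();
    let b = arena.alloc(100, 64).unwrap();
    assert_eq!(a, 0);
    assert_eq!(b, 64); // bumped up to the next 64-byte boundary
    println!("a={a} b={b}");
}
```

A slab allocator refines this by keeping per-size free lists so freed buffers can be reused, which matters for long-lived virtqueues.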

kernel/src/memory.rs (57 additions, 49 deletions)

@@ -1,6 +1,5 @@
-use core::alloc::{AllocError, Allocator, Layout};
+use core::alloc::AllocError;
 use core::ptr;
-use core::ptr::NonNull;
 
 use spin::Mutex;
 use x86_64::structures::paging::mapper::{MapToError, Translate};
@@ -16,7 +15,7 @@ use bitmap_alloc::{bootstrap_allocator, BitmapAllocator, MemoryRegion};
 static KERNEL_MAPPER: KernelMapper = KernelMapper::new();
 
 /// Physical memory frame allocator used by all kernel contexts.
-pub(crate) static KERNEL_PHYSICAL_ALLOCATOR: LockedPhysicalMemoryAllocator =
+static KERNEL_PHYSICAL_ALLOCATOR: LockedPhysicalMemoryAllocator =
     LockedPhysicalMemoryAllocator::new();
 
 /// Initialize the `KERNEL_MAPPER` with the passed `physical_memory_offset`.
@@ -195,52 +194,61 @@ impl LockedPhysicalMemoryAllocator<'_> {
     }
 }
 
-/// We implement the `Allocator` trait for `PhysicalMemoryAllocator`
-/// so that we can use it for custom allocations for physically contiguous
-/// memory.
-unsafe impl Allocator for LockedPhysicalMemoryAllocator<'_> {
-    fn allocate(&self, layout: Layout) -> Result<NonNull<[u8]>, AllocError> {
-        let size = layout.size();
-        let num_pages = size.div_ceil(PhysicalMemoryAllocator::PAGE_SIZE);
-
-        let alignment = layout.align() as u64;
-        assert!(
-            alignment <= PhysicalMemoryAllocator::PAGE_SIZE as u64,
-            "alignment {alignment} must be <= page size {}. What the hell are we aligning???",
-            PhysicalMemoryAllocator::PAGE_SIZE,
-        );
-        let start_page = self.with_lock(|allocator| {
-            allocator
-                .allocator
-                .allocate_contiguous(num_pages)
-                .ok_or(AllocError)
-        })?;
-        let start_address = start_page * PhysicalMemoryAllocator::PAGE_SIZE;
-        let actual_size = num_pages * PhysicalMemoryAllocator::PAGE_SIZE;
-        let ptr = unsafe { nonnull_ptr_slice_from_addr_len(start_address, actual_size) };
-
-        Ok(ptr)
-    }
-
-    unsafe fn deallocate(&self, ptr: NonNull<u8>, layout: Layout) {
-        let size = layout.size();
-        let num_pages = size.div_ceil(PhysicalMemoryAllocator::PAGE_SIZE);
-        let start_addr = ptr.as_ptr() as usize;
-        assert!(
-            start_addr % PhysicalMemoryAllocator::PAGE_SIZE == 0,
-            "somehow start address of {start_addr} is not page aligned"
-        );
-        let start_page = start_addr / PhysicalMemoryAllocator::PAGE_SIZE;
-        self.with_lock(|allocator| {
-            allocator.allocator.free_contiguous(start_page, num_pages);
-        });
-    }
-}
-
-unsafe fn nonnull_ptr_slice_from_addr_len(addr: usize, len_bytes: usize) -> NonNull<[u8]> {
-    let ptr = addr as *mut u8;
-    NonNull::new_unchecked(core::slice::from_raw_parts_mut(ptr, len_bytes))
-}
+// I'm not sure this is 100% correct, so I'm not doing it. In particular, I
+// worry that deallocate is incorrect because I'm not sure what the
+// characteristics of Layout are. It is better to be explicit that the physical
+// memory allocator deals with pages. If we want contiguous heap-like
+// allocation, we should implement a heap on top of physically contiguous
+// memory, or do something like slab allocation on top of physically contiguous
+// memory.
+//
+//
+// /// We implement the `Allocator` trait for `PhysicalMemoryAllocator`
+// /// so that we can use it for custom allocations for physically contiguous
+// /// memory.
+// unsafe impl Allocator for LockedPhysicalMemoryAllocator<'_> {
+//     fn allocate(&self, layout: Layout) -> Result<NonNull<[u8]>, AllocError> {
+//         let size = layout.size();
+//         let num_pages = size.div_ceil(PhysicalMemoryAllocator::PAGE_SIZE);
+
+//         let alignment = layout.align() as u64;
+//         assert!(
+//             alignment <= PhysicalMemoryAllocator::PAGE_SIZE as u64,
+//             "alignment {alignment} must be <= page size {}. What the hell are we aligning???",
+//             PhysicalMemoryAllocator::PAGE_SIZE,
+//         );
+//         let start_page = self.with_lock(|allocator| {
+//             allocator
+//                 .allocator
+//                 .allocate_contiguous(num_pages)
+//                 .ok_or(AllocError)
+//         })?;
+//         let start_address = start_page * PhysicalMemoryAllocator::PAGE_SIZE;
+//         let actual_size = num_pages * PhysicalMemoryAllocator::PAGE_SIZE;
+//         let ptr = unsafe { nonnull_ptr_slice_from_addr_len(start_address, actual_size) };

+//         Ok(ptr)
+//     }

+//     unsafe fn deallocate(&self, ptr: NonNull<u8>, layout: Layout) {
+//         let size = layout.size();
+//         let num_pages = size.div_ceil(PhysicalMemoryAllocator::PAGE_SIZE);
+//         let start_addr = ptr.as_ptr() as usize;
+//         assert!(
+//             start_addr % PhysicalMemoryAllocator::PAGE_SIZE == 0,
+//             "somehow start address of {start_addr} is not page aligned"
+//         );
+//         let start_page = start_addr / PhysicalMemoryAllocator::PAGE_SIZE;
+//         self.with_lock(|allocator| {
+//             allocator.allocator.free_contiguous(start_page, num_pages);
+//         });
+//     }
+// }

+// unsafe fn nonnull_ptr_slice_from_addr_len(addr: usize, len_bytes: usize) -> NonNull<[u8]> {
+//     let ptr = addr as *mut u8;
+//     NonNull::new_unchecked(core::slice::from_raw_parts_mut(ptr, len_bytes))
+// }
 
 unsafe impl<S: PageSize> FrameAllocator<S> for LockedPhysicalMemoryAllocator<'_> {
     fn allocate_frame(&mut self) -> Option<PhysFrame<S>> {
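The bytes-to-pages math from the removed `allocate`/`deallocate` can be checked in isolation on the host. A small sketch, assuming a 4 KiB page (the constant and function names here are illustrative, not items from kernel/src/memory.rs):

```rust
// Page math mirroring the removed Allocator impl: sizes are rounded up to
// whole pages, and alignment beyond one page is not supported.
const PAGE_SIZE: usize = 4096;

/// Number of whole pages needed to cover `size` bytes (rounding up).
fn pages_for(size: usize) -> usize {
    size.div_ceil(PAGE_SIZE)
}

/// Page alignment is the most a page-granular allocator can promise.
fn fits_page_alignment(align: usize) -> bool {
    align <= PAGE_SIZE
}

fn main() {
    assert_eq!(pages_for(1), 1); // a tiny allocation still costs a full page
    assert_eq!(pages_for(4096), 1);
    assert_eq!(pages_for(4097), 2); // one byte over spills into a second page
    assert!(fits_page_alignment(8));
    assert!(!fits_page_alignment(8192)); // would need multi-page alignment
    println!("page math ok");
}
```

The first assertion is exactly the waste the README TODO complains about: every sub-page request still consumes a full page.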

kernel/src/tests.rs (0 additions, 8 deletions)

@@ -91,14 +91,6 @@ pub(crate) fn run_misc_tests() {
     serial_println!("vec at {:p}: {vec:?}", vec.as_slice());
     assert_eq!(vec.into_iter().sum::<u32>(), 45);
 
-    // Create a Box value with the `Allocator` API
-    let my_box = Box::new_in(42, &memory::KERNEL_PHYSICAL_ALLOCATOR);
-    serial_println!("Allocator alloc'ed my_box {my_box:?} at {my_box:p}");
-
-    drop(my_box);
-    let other_box = Box::new_in(42, &memory::KERNEL_PHYSICAL_ALLOCATOR);
-    serial_println!("Allocator alloc'ed other_box at {other_box:?} at {other_box:p}");
-
     // Trigger a page fault, which should trigger a double fault if we don't
     // have a page fault handler.
     // unsafe {
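The deleted test relied on the nightly-only `Box::new_in` allocator API. On stable Rust, the shape of the same round trip can be sketched with raw `std::alloc` calls, which also illustrate the `Layout` property the commit message was unsure about: the caller is required to hand `dealloc` the same layout it used to allocate, so a deallocator may trust the layout it receives (this is a host-side sketch, not a replacement for the kernel test):

```rust
use std::alloc::{alloc, dealloc, Layout};

fn main() {
    // Describe a 64-byte, 16-byte-aligned allocation.
    let layout = Layout::from_size_align(64, 16).unwrap();
    unsafe {
        let ptr = alloc(layout);
        assert!(!ptr.is_null());
        // ... use the memory ...
        // The contract: the caller passes back the identical layout, so a
        // page-math deallocate can recompute the page count from it.
        dealloc(ptr, layout);
    }
    println!("alloc/dealloc round trip ok");
}
```

The nightly `Allocator` trait documents the same rule: the layout given to `deallocate` must "fit" the block returned by `allocate`.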
