
Commit ca6b978

josh8551021 authored and ferruhy committed
net/gve: fix refill logic causing memory corruption
There is a seemingly mundane error in the RX refill path which can lead to major issues and ultimately crash the program. The error occurs in an edge case where the number of buffers to refill is exactly the number needed for the ring to wrap around to 0.

The current refill logic is split into two conditions: first, when the number of buffers to refill is greater than the number of buffers left in the ring before wraparound occurs; second, when the opposite is true and there are enough buffers before wraparound to refill all buffers.

In this edge case, the first condition erroneously uses (<) to decide whether to wrap around, when it should have been (<=). The second condition therefore runs instead, and the tail pointer is set to an invalid value (RING_SIZE). This causes a number of cascading failures:

1. The first issue is rather mundane: rxq->bufq_tail == RING_SIZE at the end of the refill. This corrects itself on the next refill without any memory leak or corruption.
2. The second failure is that the head pointer ends up overrunning the tail, because the last buffer is refilled at sw_ring[RING_SIZE] instead of sw_ring[0]. This causes the driver to hand the application a stale mbuf, one that has potentially been freed or is otherwise invalid.
3. The third failure comes from the software ring being overrun. Because the sw_ring pointer is used directly to refill buffers, filling sw_ring[RING_SIZE] is a buffer overflow. The overwritten data may be important, and this can crash the program outright.

This patch fixes the refill bug while greatly simplifying the logic so that it is much less error-prone.

Fixes: 45da16b ("net/gve: support basic Rx data path for DQO")
Cc: [email protected]

Signed-off-by: Joshua Washington <[email protected]>
Reviewed-by: Rushil Gupta <[email protected]>
Reviewed-by: Praveen Kaligineedi <[email protected]>
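To make the off-by-one concrete, here is a minimal standalone sketch (not driver code) of the exact-fit case described above. RING_SIZE, next_avail, and nb_refill are hypothetical values chosen so the refill exactly reaches the end of the ring:

#include <stdio.h>

#define RING_SIZE 256 /* hypothetical ring size for illustration */

int main(void)
{
	unsigned int next_avail = 192;               /* current tail index */
	unsigned int nb_refill = 64;                 /* buffers to refill */
	unsigned int delta = RING_SIZE - next_avail; /* slots before wrap: 64 */

	/* Buggy check: with delta == nb_refill the wraparound branch is
	 * skipped, so the non-wrapping branch runs and leaves the tail at
	 * next_avail + nb_refill == RING_SIZE, an invalid index. */
	if (delta < nb_refill)
		printf("wraparound branch: tail ends at a valid index\n");
	else
		printf("tail = %u (== RING_SIZE, invalid)\n",
		       next_avail + nb_refill);

	/* With the correct (delta <= nb_refill) check, this exact-fit
	 * case would take the wraparound path and the tail would land
	 * back at 0. */
	return 0;
}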
1 parent 4f24ef0 commit ca6b978

File tree

1 file changed: +16 −46 lines


drivers/net/gve/gve_rx_dqo.c

Lines changed: 16 additions & 46 deletions
@@ -11,66 +11,36 @@
 static inline void
 gve_rx_refill_dqo(struct gve_rx_queue *rxq)
 {
-	volatile struct gve_rx_desc_dqo *rx_buf_ring;
 	volatile struct gve_rx_desc_dqo *rx_buf_desc;
 	struct rte_mbuf *nmb[rxq->nb_rx_hold];
 	uint16_t nb_refill = rxq->nb_rx_hold;
-	uint16_t nb_desc = rxq->nb_rx_desc;
 	uint16_t next_avail = rxq->bufq_tail;
 	struct rte_eth_dev *dev;
 	uint64_t dma_addr;
-	uint16_t delta;
 	int i;
 
 	if (rxq->nb_rx_hold < rxq->free_thresh)
 		return;
 
-	rx_buf_ring = rxq->rx_ring;
-	delta = nb_desc - next_avail;
-	if (unlikely(delta < nb_refill)) {
-		if (likely(rte_pktmbuf_alloc_bulk(rxq->mpool, nmb, delta) == 0)) {
-			for (i = 0; i < delta; i++) {
-				rx_buf_desc = &rx_buf_ring[next_avail + i];
-				rxq->sw_ring[next_avail + i] = nmb[i];
-				dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
-				rx_buf_desc->header_buf_addr = 0;
-				rx_buf_desc->buf_addr = dma_addr;
-			}
-			nb_refill -= delta;
-			next_avail = 0;
-			rxq->nb_rx_hold -= delta;
-		} else {
-			rxq->stats.no_mbufs_bulk++;
-			rxq->stats.no_mbufs += nb_desc - next_avail;
-			dev = &rte_eth_devices[rxq->port_id];
-			dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
-			PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
-				    rxq->port_id, rxq->queue_id);
-			return;
-		}
+	if (unlikely(rte_pktmbuf_alloc_bulk(rxq->mpool, nmb, nb_refill))) {
+		rxq->stats.no_mbufs_bulk++;
+		rxq->stats.no_mbufs += nb_refill;
+		dev = &rte_eth_devices[rxq->port_id];
+		dev->data->rx_mbuf_alloc_failed += nb_refill;
+		PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
+			    rxq->port_id, rxq->queue_id);
+		return;
 	}
 
-	if (nb_desc - next_avail >= nb_refill) {
-		if (likely(rte_pktmbuf_alloc_bulk(rxq->mpool, nmb, nb_refill) == 0)) {
-			for (i = 0; i < nb_refill; i++) {
-				rx_buf_desc = &rx_buf_ring[next_avail + i];
-				rxq->sw_ring[next_avail + i] = nmb[i];
-				dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
-				rx_buf_desc->header_buf_addr = 0;
-				rx_buf_desc->buf_addr = dma_addr;
-			}
-			next_avail += nb_refill;
-			rxq->nb_rx_hold -= nb_refill;
-		} else {
-			rxq->stats.no_mbufs_bulk++;
-			rxq->stats.no_mbufs += nb_desc - next_avail;
-			dev = &rte_eth_devices[rxq->port_id];
-			dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
-			PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
-				    rxq->port_id, rxq->queue_id);
-		}
+	for (i = 0; i < nb_refill; i++) {
+		rx_buf_desc = &rxq->rx_ring[next_avail];
+		rxq->sw_ring[next_avail] = nmb[i];
+		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
+		rx_buf_desc->header_buf_addr = 0;
+		rx_buf_desc->buf_addr = dma_addr;
+		next_avail = (next_avail + 1) & (rxq->nb_rx_desc - 1);
 	}
-
+	rxq->nb_rx_hold -= nb_refill;
 	rte_write32(next_avail, rxq->qrx_tail);
 
 	rxq->bufq_tail = next_avail;
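A note on the new index advance: (next_avail + 1) & (rxq->nb_rx_desc - 1) is the usual mask form of modulo wraparound, which is only correct when the ring size is a power of two; treat that as an assumption the patch relies on. A tiny standalone sketch with a hypothetical ring_next() helper:

#include <assert.h>
#include <stdint.h>

/* Hypothetical helper mirroring the patch's index advance; valid only
 * when ring_size is a power of two, where (x & (ring_size - 1)) equals
 * (x % ring_size). */
static inline uint16_t ring_next(uint16_t idx, uint16_t ring_size)
{
	return (idx + 1) & (ring_size - 1);
}

int main(void)
{
	assert(ring_next(254, 256) == 255);
	assert(ring_next(255, 256) == 0); /* wraps to 0, never reaches 256 */
	return 0;
}

Compared with the old two-branch refill, masking the index once per buffer means next_avail can never hold the out-of-range value RING_SIZE, which is exactly the state the old code could reach.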
