From: Jian-Hong Pan
Sent: 25 July 2019 09:09

Each skb used as an element of the RX ring was expected to carry a buffer of 8216 bytes (RTK_PCI_RX_BUF_SIZE). However, the skb's true buffer size after allocation is 16640 bytes because of alignment, on x86_64 for example. That per-buffer difference is then multiplied by 512 (RTK_MAX_RX_DESC_NUM), over 4 MB wasted in total. To avoid wasting that much memory, this patch follows David's suggestion [1] and uses general buffer arrays, instead of skbs, as the elements of the RX ring.
...
 	for (i = 0; i < len; i++) {
-		skb = dev_alloc_skb(buf_sz);
-		if (!skb) {
+		buf = devm_kzalloc(rtwdev->dev, buf_sz, GFP_ATOMIC);
You should do this allocation somewhere that can sleep. Then you don't need GFP_ATOMIC, making the allocation (and dma map) much less likely to fail. If they do fail, using a smaller ring might be better than failing completely.
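A minimal sketch of that suggestion, with made-up struct and function names (not the actual rtw88 code): allocate from a sleepable init path with GFP_KERNEL, and if an allocation or mapping fails partway through, run with however many buffers succeeded rather than erroring out.

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/types.h>

#define RX_DESC_NUM	512	/* RTK_MAX_RX_DESC_NUM in the patch */

/* Hypothetical ring bookkeeping for the sketch; not the rtw88 structs. */
struct rx_ring_sketch {
	void *buf[RX_DESC_NUM];
	dma_addr_t dma[RX_DESC_NUM];
	u32 len;		/* number of usable descriptors */
};

static int rx_ring_fill_sketch(struct device *dev, struct rx_ring_sketch *ring,
			       size_t buf_sz)
{
	u32 i;

	for (i = 0; i < RX_DESC_NUM; i++) {
		/* Called from probe/init, so we can sleep: GFP_KERNEL is
		 * much less likely to fail than GFP_ATOMIC. */
		void *buf = devm_kzalloc(dev, buf_sz, GFP_KERNEL);
		dma_addr_t dma;

		if (!buf)
			break;

		dma = dma_map_single(dev, buf, buf_sz, DMA_FROM_DEVICE);
		if (dma_mapping_error(dev, dma)) {
			devm_kfree(dev, buf);
			break;
		}

		ring->buf[i] = buf;
		ring->dma[i] = dma;
	}

	if (!i)
		return -ENOMEM;

	/* Partial success: use a smaller ring instead of failing. */
	ring->len = i;
	return 0;
}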
I suspect that buf_sz gets rounded up somewhat. Also you almost certainly want 'buf' to be cache-line aligned. I don't think devm_kzalloc() guarantees that at all.
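One way to address both points, sketched under the assumption that the driver controls buf_sz (the macro and function names below are made up for the example): round the buffer size up to a whole number of cache lines, and check what alignment the allocator actually returned, since devm_kzalloc() does not promise cache-line alignment.

#include <linux/cache.h>
#include <linux/device.h>
#include <linux/kernel.h>

/* Hypothetical sizes for the sketch; RTK_PCI_RX_BUF_SIZE is 8216 in the patch. */
#define RX_BUF_SIZE		8216
#define RX_BUF_SIZE_ALIGNED	L1_CACHE_ALIGN(RX_BUF_SIZE)

static void *alloc_one_rx_buf(struct device *dev)
{
	void *buf = devm_kzalloc(dev, RX_BUF_SIZE_ALIGNED, GFP_KERNEL);

	/* Padding the size only stops adjacent buffers sharing a cache
	 * line; the pointer itself may still be misaligned, so warn. */
	if (buf)
		WARN_ON(!IS_ALIGNED((unsigned long)buf, L1_CACHE_BYTES));
	return buf;
}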
While allocating all 512 buffers in one block (just over 4MB) is probably not a good idea, you may want to allocate (and dma map) them in groups.
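A sketch of the grouping idea, again with invented names and an arbitrary group size: instead of one ~4 MB block or 512 separate allocations, allocate and dma-map a handful of buffers at a time and hand out slices of each group.

#include <linux/cache.h>
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/slab.h>

#define RX_DESC_NUM	512			/* RTK_MAX_RX_DESC_NUM */
#define RX_BUF_SIZE	L1_CACHE_ALIGN(8216)	/* RTK_PCI_RX_BUF_SIZE, padded */
#define BUFS_PER_GROUP	8			/* arbitrary choice for the sketch */

static int rx_bufs_alloc_grouped(struct device *dev,
				 void *bufs[RX_DESC_NUM],
				 dma_addr_t dmas[RX_DESC_NUM])
{
	size_t group_sz = (size_t)BUFS_PER_GROUP * RX_BUF_SIZE;
	unsigned int i, j;

	for (i = 0; i < RX_DESC_NUM; i += BUFS_PER_GROUP) {
		void *group = kzalloc(group_sz, GFP_KERNEL);
		dma_addr_t dma;

		if (!group)
			return -ENOMEM;	/* unwinding earlier groups omitted */

		dma = dma_map_single(dev, group, group_sz, DMA_FROM_DEVICE);
		if (dma_mapping_error(dev, dma)) {
			kfree(group);
			return -ENOMEM;
		}

		/* Hand out per-descriptor slices of the mapped group. */
		for (j = 0; j < BUFS_PER_GROUP; j++) {
			bufs[i + j] = group + j * RX_BUF_SIZE;
			dmas[i + j] = dma + j * RX_BUF_SIZE;
		}
	}
	return 0;
}

The trade-off is that each group still needs one contiguous allocation, so the group size has to balance allocation reliability against the number of dma mappings.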
David