1.\" Copyright (c) 2003 2.\" Fraunhofer Institute for Open Communication Systems (FhG Fokus). 3.\" All rights reserved. 4.\" 5.\" Redistribution and use in source and binary forms, with or without 6.\" modification, are permitted provided that the following conditions 7.\" are met: 8.\" 1. Redistributions of source code must retain the above copyright 9.\" notice, this list of conditions and the following disclaimer. 10.\" 2. Redistributions in binary form must reproduce the above copyright 11.\" notice, this list of conditions and the following disclaimer in the 12.\" documentation and/or other materials provided with the distribution. 13.\" 14.\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND 15.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 16.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE 17.\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE 18.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 19.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 20.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 21.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT 22.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY 23.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF 24.\" SUCH DAMAGE. 25.\" 26.\" Author: Hartmut Brandt <harti@FreeBSD.org> 27.\"
| 1.\" Copyright (c) 2003 2.\" Fraunhofer Institute for Open Communication Systems (FhG Fokus). 3.\" All rights reserved. 4.\" 5.\" Redistribution and use in source and binary forms, with or without 6.\" modification, are permitted provided that the following conditions 7.\" are met: 8.\" 1. Redistributions of source code must retain the above copyright 9.\" notice, this list of conditions and the following disclaimer. 10.\" 2. Redistributions in binary form must reproduce the above copyright 11.\" notice, this list of conditions and the following disclaimer in the 12.\" documentation and/or other materials provided with the distribution. 13.\" 14.\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND 15.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 16.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE 17.\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE 18.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 19.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 20.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 21.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT 22.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY 23.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF 24.\" SUCH DAMAGE. 25.\" 26.\" Author: Hartmut Brandt <harti@FreeBSD.org> 27.\"
|
28.\" $FreeBSD: head/share/man/man9/mbpool.9 131736 2004-07-07 07:56:58Z ru $
| 28.\" $FreeBSD: head/share/man/man9/mbpool.9 208027 2010-05-13 12:07:55Z uqs $
|
29.\" 30.Dd July 15, 2003 31.Dt MBPOOL 9 32.Os 33.Sh NAME 34.Nm mbpool 35.Nd "buffer pools for network interfaces" 36.Sh SYNOPSIS 37.In sys/types.h 38.In machine/bus.h 39.In sys/mbpool.h 40.Vt struct mbpool ; 41.Ft int 42.Fo mbp_create 43.Fa "struct mbpool **mbp" "const char *name" "bus_dma_tag_t dmat" 44.Fa "u_int max_pages" "size_t page_size" "size_t chunk_size" 45.Fc 46.Ft void 47.Fn mbp_destroy "struct mbpool *mbp" 48.Ft "void *" 49.Fn mbp_alloc "struct mbpool *mbp" "bus_addr_t *pa" "uint32_t *hp" 50.Ft void 51.Fn mbp_free "struct mbpool *mbp" "void *p" 52.Ft void 53.Fn mbp_ext_free "void *" "void *" 54.Ft void 55.Fn mbp_card_free "struct mbpool *mbp" 56.Ft void 57.Fn mbp_count "struct mbpool *mbp" "u_int *used" "u_int *card" "u_int *free" 58.Ft "void *" 59.Fn mbp_get "struct mbpool *mbp" "uint32_t h" 60.Ft "void *" 61.Fn mbp_get_keep "struct mbpool *mbp" "uint32_t h" 62.Ft void 63.Fo mbp_sync 64.Fa "struct mbpool *mbp" "uint32_t h" "bus_addr_t off" "bus_size_t len" 65.Fa "u_int op" 66.Fc 67.Pp 68.Fn MODULE_DEPEND "your_module" "libmbpool" 1 1 1 69.Pp 70.Cd "options LIBMBPOOL" 71.Sh DESCRIPTION 72Mbuf pools are intended to help drivers for interface cards that need huge 73amounts of receive buffers, and additionally provides a mapping between these 74buffers and 32-bit handles. 75.Pp 76An example of these cards are the Fore/Marconi ForeRunnerHE cards. 77These 78employ up to 8 receive groups, each with two buffer pools, each of which 79can contain up to 8192. 80This gives a total maximum number of more than 81100000 buffers. 82Even with a more moderate configuration the card eats several 83thousand buffers. 84Each of these buffers must be mapped for DMA. 85While for 86machines without an IOMMU and with lesser than 4GByte memory this is not 87a problem, for other machines this may quickly eat up all available IOMMU 88address space and/or bounce buffers. 89On sparc64, the default I/O page size 90is 16k, so mapping a simple mbuf wastes 31/32 of the address space. 
.Pp
Another problem with most of these cards is that they support putting a
32-bit handle into the buffer descriptor together with the physical
address.
This handle is reflected back to the driver when the buffer is filled, and
assists the driver in finding the buffer in host memory.
On 32-bit machines, the virtual address of the buffer is usually used as
the handle.
For obvious reasons this does not work on 64-bit machines, so a mapping is
needed between these handles and the buffers.
This mapping should be possible without searching lists and the like.
.Pp
An mbuf pool overcomes both problems by allocating DMA-able memory
page-wise with a per-pool configurable page size.
Each page is divided into a number of equally-sized chunks, the last
.Dv MBPOOL_TRAILER_SIZE
bytes (4 bytes) of which are used by the pool code.
The rest of each chunk is usable as a buffer.
There is a per-pool limit on the number of pages that will be allocated.
.Pp
Additionally, the code manages two flags for each buffer:
.Dq on-card
and
.Dq used .
A buffer may be in one of three states:
.Bl -tag -width "on-card"
.It free
Neither of the flags is set.
.It on-card
Both flags are set.
The buffer is assumed to be handed over to the card and waiting to be
filled.
.It used
The buffer was returned by the card and is now travelling through the
system.
.El
.Pp
A pool is created with
.Fn mbp_create .
This call specifies a DMA tag
.Fa dmat
to be used to create and map the memory pages via
.Xr bus_dmamem_alloc 9 .
The
.Fa chunk_size
includes the pool overhead.
This means that to get buffers for 5 ATM cells (240 bytes), a chunk size
of 256 should be specified.
This results in 12 unused bytes between the buffer and the four-byte pool
overhead.
The total maximum number of buffers in a pool is
.Fa max_pages
*
.Fa ( page_size
/
.Fa chunk_size ) .
The maximum value of
.Fa max_pages
is 2^14-1 (16383) and the maximum of
.Fa page_size
/
.Fa chunk_size
is 2^9 (512).
If the call is successful, a pointer to a newly allocated
.Vt "struct mbpool"
is stored into the variable pointed to by
.Fa mbp .
.Pp
A pool is destroyed with
.Fn mbp_destroy .
This frees all pages and the pool structure itself.
If compiled with
.Dv DIAGNOSTIC ,
the code checks that all buffers are free.
If not, a warning message is issued to the console.
.Pp
A buffer is allocated with
.Fn mbp_alloc .
This returns the virtual address of the buffer and stores the physical
address into the variable pointed to by
.Fa pa .
The handle is stored into the variable pointed to by
.Fa hp .
The two most significant bits and the 7 least significant bits of the
handle are unused by the pool code and may be used by the caller.
These are automatically stripped when a handle is passed to one of the
other functions.
If a buffer cannot be allocated (either because the maximum number of
pages is reached, no memory is available, or the memory cannot be mapped),
.Dv NULL
is returned.
If a buffer could be allocated, it is in the
.Dq on-card
state.
.Pp
When the buffer is returned by the card, the driver calls
.Fn mbp_get
with the handle.
This function returns the virtual address of the buffer and clears the
.Dq on-card
bit.
The buffer is now in the
.Dq used
state.
The function
.Fn mbp_get_keep
differs from
.Fn mbp_get
in that it does not clear the
.Dq on-card
bit.
This can be used for buffers that are returned
.Dq partially
by the card.
.Pp
A buffer is freed by calling
.Fn mbp_free
with the virtual address of the buffer.
This clears the
.Dq used
bit and puts the buffer on the free list of the pool.
Note that free buffers are NOT returned to the system.
The function
.Fn mbp_ext_free
can be given to
.Fn m_extadd
as the free function.
The user argument must be the pointer to the pool.
.Pp
Before using the contents of a buffer returned by the card, the driver
must call
.Fn mbp_sync
with the appropriate parameters.
This results in a call to
.Xr bus_dmamap_sync 9
for the buffer.
.Pp
All buffers in the pool that are currently in the
.Dq on-card
state can be freed with a call to
.Fn mbp_card_free .
This may be called by the driver when it stops the interface.
Buffers in the
.Dq used
state are not freed by this call.
.Pp
For debugging it is possible to call
.Fn mbp_count .
This returns the number of buffers in the
.Dq used
and
.Dq on-card
states and the number of buffers on the free list.
.Sh SEE ALSO
.Xr mbuf 9
| 29.\" 30.Dd July 15, 2003 31.Dt MBPOOL 9 32.Os 33.Sh NAME 34.Nm mbpool 35.Nd "buffer pools for network interfaces" 36.Sh SYNOPSIS 37.In sys/types.h 38.In machine/bus.h 39.In sys/mbpool.h 40.Vt struct mbpool ; 41.Ft int 42.Fo mbp_create 43.Fa "struct mbpool **mbp" "const char *name" "bus_dma_tag_t dmat" 44.Fa "u_int max_pages" "size_t page_size" "size_t chunk_size" 45.Fc 46.Ft void 47.Fn mbp_destroy "struct mbpool *mbp" 48.Ft "void *" 49.Fn mbp_alloc "struct mbpool *mbp" "bus_addr_t *pa" "uint32_t *hp" 50.Ft void 51.Fn mbp_free "struct mbpool *mbp" "void *p" 52.Ft void 53.Fn mbp_ext_free "void *" "void *" 54.Ft void 55.Fn mbp_card_free "struct mbpool *mbp" 56.Ft void 57.Fn mbp_count "struct mbpool *mbp" "u_int *used" "u_int *card" "u_int *free" 58.Ft "void *" 59.Fn mbp_get "struct mbpool *mbp" "uint32_t h" 60.Ft "void *" 61.Fn mbp_get_keep "struct mbpool *mbp" "uint32_t h" 62.Ft void 63.Fo mbp_sync 64.Fa "struct mbpool *mbp" "uint32_t h" "bus_addr_t off" "bus_size_t len" 65.Fa "u_int op" 66.Fc 67.Pp 68.Fn MODULE_DEPEND "your_module" "libmbpool" 1 1 1 69.Pp 70.Cd "options LIBMBPOOL" 71.Sh DESCRIPTION 72Mbuf pools are intended to help drivers for interface cards that need huge 73amounts of receive buffers, and additionally provides a mapping between these 74buffers and 32-bit handles. 75.Pp 76An example of these cards are the Fore/Marconi ForeRunnerHE cards. 77These 78employ up to 8 receive groups, each with two buffer pools, each of which 79can contain up to 8192. 80This gives a total maximum number of more than 81100000 buffers. 82Even with a more moderate configuration the card eats several 83thousand buffers. 84Each of these buffers must be mapped for DMA. 85While for 86machines without an IOMMU and with lesser than 4GByte memory this is not 87a problem, for other machines this may quickly eat up all available IOMMU 88address space and/or bounce buffers. 89On sparc64, the default I/O page size 90is 16k, so mapping a simple mbuf wastes 31/32 of the address space. 
91.Pp 92Another problem with most of these cards is that they support putting a 32-bit 93handle into the buffer descriptor together with the physical address. 94This handle is reflected back to the driver when the buffer is filled, and 95assists the driver in finding the buffer in host memory. 96For 32-bit machines, 97the virtual address of the buffer is usually used as the handle. 98This does not 99work for 64-bit machines for obvious reasons, so a mapping is needed between 100these handles and the buffers. 101This mapping should be possible without 102searching lists and the like. 103.Pp 104An mbuf pool overcomes both problems by allocating DMA-able memory page wise 105with a per-pool configurable page size. 106Each page is divided into a number of 107equally-sized chunks, the last 108.Dv MBPOOL_TRAILER_SIZE 109of which are used by the pool code (4 bytes). 110The rest of each chunk is 111usable as a buffer. 112There is a per-pool limit on pages that will be allocated. 113.Pp 114Additionally, the code manages two flags for each buffer: 115.Dq on-card 116and 117.Dq used . 118A buffer may be in one of three states: 119.Bl -tag -width "on-card" 120.It free 121None of the flags is set. 122.It on-card 123Both flags are set. 124The buffer is assumed to be handed over to the card and 125waiting to be filled. 126.It used 127The buffer was returned by the card and is now travelling through the system. 128.El 129.Pp 130A pool is created with 131.Fn mbp_create . 132This call specifies a DMA tag 133.Fa dmat 134to be used to create and map the memory pages via 135.Xr bus_dmamem_alloc 9 . 136The 137.Fa chunk_size 138includes the pool overhead. 139It means that to get buffers for 5 ATM cells 140(240 bytes), a chunk size of 256 should be specified. 141This results in 12 unused 142bytes between the buffer, and the pool overhead of four byte. 143The total 144maximum number of buffers in a pool is 145.Fa max_pages 146* 147.Fa ( page_size 148/ 149.Fa chunk_size ) . 
150The maximum value for 151.Fa max_pages 152is 2^14-1 (16383) and the maximum of 153.Fa page_size 154/ 155.Fa chunk_size 156is 2^9 (512). 157If the call is successful, a pointer to a newly allocated 158.Vt "struct mbpool" 159is set into the variable pointed to by 160.Fa mpb . 161.Pp 162A pool is destroyed with 163.Fn mbp_destroy . 164This frees all pages and the pool structure itself. 165If compiled with 166.Dv DIAGNOSTICS , 167the code checks that all buffers are free. 168If not, a warning message is issued 169to the console. 170.Pp 171A buffer is allocated with 172.Fn mbp_alloc . 173This returns the virtual address of the buffer and stores the physical 174address into the variable pointed to by 175.Fa pa . 176The handle is stored into the variable pointed to by 177.Fa hp . 178The two most significant bits and the 7 least significant bits of the handle 179are unused by the pool code and may be used by the caller. 180These are 181automatically stripped when passing a handle to one of the other functions. 182If a buffer cannot be allocated (either because the maximum number of pages 183is reached, no memory is available or the memory cannot be mapped), 184.Dv NULL 185is returned. 186If a buffer could be allocated, it is in the 187.Dq on-card 188state. 189.Pp 190When the buffer is returned by the card, the driver calls 191.Fn mbp_get 192with the handle. 193This function returns the virtual address of the buffer 194and clears the 195.Dq on-card 196bit. 197The buffer is now in the 198.Dq used 199state. 200The function 201.Fn mbp_get_keep 202differs from 203.Fn mbp_get 204in that it does not clear the 205.Dq on-card 206bit. 207This can be used for buffers 208that are returned 209.Dq partially 210by the card. 211.Pp 212A buffer is freed by calling 213.Fn mbp_free 214with the virtual address of the buffer. 215This clears the 216.Dq used 217bit, and 218puts the buffer on the free list of the pool. 219Note that free buffers 220are NOT returned to the system. 
221The function 222.Fn mbp_ext_free 223can be given to 224.Fn m_extadd 225as the free function. 226The user argument must be the pointer to 227the pool. 228.Pp 229Before using the contents of a buffer returned by the card, the driver 230must call 231.Fn mbp_sync 232with the appropriate parameters. 233This results in a call to 234.Xr bus_dmamap_sync 9 235for the buffer. 236.Pp 237All buffers in the pool that are currently in the 238.Dq on-card 239state can be freed 240with a call to 241.Fn mbp_card_free . 242This may be called by the driver when it stops the interface. 243Buffers in the 244.Dq used 245state are not freed by this call. 246.Pp 247For debugging it is possible to call 248.Fn mbp_count . 249This returns the number of buffers in the 250.Dq used 251and 252.Dq on-card 253states and 254the number of buffers on the free list. 255.Sh SEE ALSO 256.Xr mbuf 9
|
| 257.Sh AUTHORS 258.An Harti Brandt Aq harti@FreeBSD.org
|
.Sh CAVEATS
The function
.Fn mbp_sync
is currently a no-op because
.Xr bus_dmamap_sync 9
is missing the offset and length parameters.
.Sh AUTHORS
.An Harti Brandt Aq harti@FreeBSD.org