# Chelsio T5 Factory Default configuration file.
#
# Copyright (C) 2010-2013 Chelsio Communications. All rights reserved.
#
# DO NOT MODIFY THIS FILE UNDER ANY CIRCUMSTANCES. MODIFICATION OF
# THIS FILE WILL RESULT IN A NON-FUNCTIONAL T5 ADAPTER AND MAY RESULT
# IN PHYSICAL DAMAGE TO T5 ADAPTERS.

# This file provides the default, power-on configuration for 4-port T5-based
# adapters shipped from the factory. These defaults are designed to address
# the needs of the vast majority of T5 customers. The basic idea is to have
# a default configuration which allows a customer to plug a T5 adapter in and
# have it work regardless of OS, driver or application, except in the most
# unusual and/or demanding customer applications.
#
# Many of the T5 resources which are described by this configuration are
# finite. This requires balancing the configuration/operation needs of
# device drivers across OSes and a large number of customer applications.
#
# Some of the more important resources to allocate and their constraints are:
#  1. Virtual Interfaces: 128.
#  2. Ingress Queues with Free Lists: 1024. PCI-E SR-IOV Virtual Functions
#     must use a power of 2 Ingress Queues.
#  3. Egress Queues: 128K. PCI-E SR-IOV Virtual Functions must use a
#     power of 2 Egress Queues.
#  4. MSI-X Vectors: 1088. A complication here is that the PCI-E SR-IOV
#     Virtual Functions based off of a Physical Function all get the
#     same number of MSI-X Vectors as the base Physical Function.
#     Additionally, regardless of whether Virtual Functions are enabled or
#     not, their MSI-X "needs" are counted by the PCI-E implementation.
#     And finally, all Physical Functions capable of supporting Virtual
#     Functions (PF0-3) must have the same number of configured TotalVFs in
#     their SR-IOV Capabilities.
#  5. Multi-Port Support (MPS) TCAM: 336 entries to support MAC destination
#     address matching on Ingress Packets.
#
# Some of the important OS/Driver resource needs are:
#  6. Some OS Drivers will manage all resources through a single Physical
#     Function (currently PF0 but it could be any Physical Function). Thus,
#     this "Unified PF" will need to have enough resources allocated to it
#     to allow for this. And because of the MSI-X resource allocation
#     constraints mentioned above, this probably means we'll either have to
#     severely limit the TotalVFs if we continue to use PF0 as the Unified PF,
#     or we'll need to move the Unified PF into the PF4-7 range since those
#     Physical Functions don't have any Virtual Functions associated with
#     them.
#  7. Some OS Drivers will manage different ports and functions (NIC,
#     storage, etc.) on different Physical Functions. For example, NIC
#     functions for ports 0-3 on PF0-3, FCoE on PF4, iSCSI on PF5, etc.
#
# Some of the customer application needs which need to be accommodated:
#  8. Some customers will want to support large CPU count systems with
#     good scaling. Thus, we'll need to accommodate a number of
#     Ingress Queues and MSI-X Vectors to allow up to some number of CPUs
#     to be involved per port and per application function. For example,
#     in the case where all ports and application functions will be
#     managed via a single Unified PF and we want to accommodate scaling up
#     to 8 CPUs, we would want:
#
#         4 ports *
#         3 application functions (NIC, FCoE, iSCSI) per port *
#         8 Ingress Queue/MSI-X Vectors per application function
#
#     for a total of 96 Ingress Queues and MSI-X Vectors on the Unified PF.
#     (Plus a few for Firmware Event Queues, etc.)
#
#  9. Some customers will want to use T5's PCI-E SR-IOV Capability to allow
#     Virtual Machines to directly access T5 functionality via SR-IOV
#     Virtual Functions and "PCI Device Passthrough" -- this is especially
#     true for the NIC application functionality. (Note that there is
#     currently no ability to use the TOE, FCoE, iSCSI, etc. via Virtual
#     Functions, so this is in fact solely limited to NIC.)
#


# Global configuration settings.
#
[global]
        rss_glb_config_mode = basicvirtual
        rss_glb_config_options = tnlmapen,hashtoeplitz,tnlalllkp

        # PCIE_MA_RSP register
        pcie_ma_rsp_timervalue = 500    # the timer value in units of us
        reg[0x59c4] = 0x3/0x3           # enable the timers

        # PL_TIMEOUT register
        pl_timeout_value = 200          # the timeout value in units of us

        # The following Scatter Gather Engine (SGE) settings assume a 4KB Host
        # Page Size and a 64B L1 Cache Line Size. It programs the
        # EgrStatusPageSize and IngPadBoundary to 64B and the PktShift to 2.
        # If a Master PF Driver finds itself on a machine with different
        # parameters, then the Master PF Driver is responsible for initializing
        # these parameters to appropriate values.
        #
        # Notes:
        #  1. The Free List Buffer Sizes below are raw and the firmware will
        #     round them up to the Ingress Padding Boundary.
        #  2. The SGE Timer Values below are expressed in microseconds.
        #     The firmware will convert these values to Core Clock Ticks when
        #     it processes the configuration parameters.
        #
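        # A note on the reg[...] syntax used throughout this file:
        # "reg[addr] = value" writes the register outright, while
        # "reg[addr] = value/mask" is assumed to be a read-modify-write that
        # updates only the bits set in the mask (e.g. reg[0x59c4] = 0x3/0x3
        # above sets just the two timer-enable bits). Under that reading,
        # the SGE_CONTROL write below touches only the bits in 0x21c70.
        # Likewise, SGE_HOST_PAGE_SIZE = 0x22222222 is consistent with each
        # PF's 4-bit field holding log2(Host Page Size) - 10, i.e.
        # 12 - 10 = 2 for the 4KB Host Page Size assumed above.
        #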
        reg[0x1008] = 0x40810/0x21c70   # SGE_CONTROL
        reg[0x100c] = 0x22222222        # SGE_HOST_PAGE_SIZE
        reg[0x10a0] = 0x01040810        # SGE_INGRESS_RX_THRESHOLD
        reg[0x1044] = 4096              # SGE_FL_BUFFER_SIZE0
        reg[0x1048] = 65536             # SGE_FL_BUFFER_SIZE1
        reg[0x104c] = 1536              # SGE_FL_BUFFER_SIZE2
        reg[0x1050] = 9024              # SGE_FL_BUFFER_SIZE3
        reg[0x1054] = 9216              # SGE_FL_BUFFER_SIZE4
        reg[0x1058] = 2048              # SGE_FL_BUFFER_SIZE5
        reg[0x105c] = 128               # SGE_FL_BUFFER_SIZE6
        reg[0x1060] = 8192              # SGE_FL_BUFFER_SIZE7
        reg[0x1064] = 16384             # SGE_FL_BUFFER_SIZE8
        reg[0x10a4] = 0xa000a000/0xf000f000     # SGE_DBFIFO_STATUS
        reg[0x10a8] = 0x402000/0x402000         # SGE_DOORBELL_CONTROL

        # SGE_THROTTLE_CONTROL
        bar2throttlecount = 500         # bar2throttlecount in us

        sge_timer_value = 5, 10, 20, 50, 100, 200   # SGE_TIMER_VALUE* in usecs

        reg[0x1124] = 0x00000400/0x00000400 # SGE_CONTROL2, enable VFIFO; if
                                        # SGE_VFIFO_SIZE is not set, then the
                                        # firmware will set it up as a function
                                        # of the number of egress queues used

        reg[0x1130] = 0x00d5ffeb        # SGE_DBP_FETCH_THRESHOLD, fetch
                                        # threshold set to queue depth
                                        # minus 128 entries for FL and HP
                                        # queues, and 0xfff for LP, which
                                        # prompts the firmware to set it up
                                        # as a function of the egress queues
                                        # used

        reg[0x113c] = 0x0002ffc0        # SGE_VFIFO_SIZE, set to 0x2ffc0, which
                                        # prompts the firmware to set it up as
                                        # a function of the number of egress
                                        # queues used

        reg[0x7dc0] = 0x062f8849        # TP_SHIFT_CNT

        # Selection of tuples for LE filter lookup, fields (and widths which
        # must sum to <= 36): { IP Fragment (1), MPS Match Type (3),
        # IP Protocol (8), [Inner] VLAN (17), Port (3), FCoE (1) }
        #
        filterMode = fragmentation, mpshittype, protocol, vlan, port, fcoe, srvrsram
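
        # As a sanity check on the widths listed above:
        # 1 + 3 + 8 + 17 + 3 + 1 = 33, which fits the <= 36 budget.
        # (filterMode also names srvrsram, which is not in the width list;
        # it presumably ties into the LE Server SRAM enable at the end of
        # this section rather than consuming filter-compare width.)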

        # Percentage of dynamic memory (in either the EDRAM or external MEM)
        # to use for TP RX payload
        tp_pmrx = 30, 512

        # TP RX payload page size
        tp_pmrx_pagesize = 64K

        # TP number of RX channels
        tp_nrxch = 0            # 0 (auto) = 1

        # Percentage of dynamic memory (in either the EDRAM or external MEM)
        # to use for TP TX payload
        tp_pmtx = 50, 512

        # TP TX payload page size
        tp_pmtx_pagesize = 64K

        # TP number of TX channels
        tp_ntxch = 0            # 0 (auto) = equal number of ports

        reg[0x19c04] = 0x00400000/0x00400000    # LE Server SRAM Enable

# Some "definitions" to make the rest of this a bit more readable. We support
# 4 ports, 3 functions (NIC, FCoE and iSCSI), scaling up to 8 "CPU Queue Sets"
# per function per port ...
#
# NMSIX = 1088          # available MSI-X Vectors
# NVI = 128             # available Virtual Interfaces
# NMPSTCAM = 336        # MPS TCAM entries
#
# NPORTS = 4            # ports
# NCPUS = 8             # CPUs we want to support scalably
# NFUNCS = 3            # functions per port (NIC, FCoE, iSCSI)

# Breakdown of Virtual Interface/Queue/Interrupt resources for the "Unified
# PF" which many OS Drivers will use to manage most or all functions.
#
# Each Ingress Queue can use one MSI-X interrupt but some Ingress Queues can
# use Forwarded Interrupt Ingress Queues. For the latter, an Ingress Queue
# would be created and the Queue ID of a Forwarded Interrupt Ingress Queue
# will be specified as the "Ingress Queue Asynchronous Destination Index."
# Thus, the number of MSI-X Vectors assigned to the Unified PF will be less
# than or equal to the number of Ingress Queues ...
#
# NVI_NIC = 4           # NIC access to NPORTS
# NFLIQ_NIC = 32        # NIC Ingress Queues with Free Lists
# NETHCTRL_NIC = 32     # NIC Ethernet Control/TX Queues
# NEQ_NIC = 64          # NIC Egress Queues (FL, ETHCTRL/TX)
# NMPSTCAM_NIC = 16     # NIC MPS TCAM Entries (NPORTS * 4)
# NMSIX_NIC = 32        # NIC MSI-X Interrupt Vectors (FLIQ)
#
# NVI_OFLD = 0          # Offload uses NIC function to access ports
# NFLIQ_OFLD = 16       # Offload Ingress Queues with Free Lists
# NETHCTRL_OFLD = 0     # Offload Ethernet Control/TX Queues
# NEQ_OFLD = 16         # Offload Egress Queues (FL)
# NMPSTCAM_OFLD = 0     # Offload MPS TCAM Entries (uses NIC's)
# NMSIX_OFLD = 16       # Offload MSI-X Interrupt Vectors (FLIQ)
#
# NVI_RDMA = 0          # RDMA uses NIC function to access ports
# NFLIQ_RDMA = 4        # RDMA Ingress Queues with Free Lists
# NETHCTRL_RDMA = 0     # RDMA Ethernet Control/TX Queues
# NEQ_RDMA = 4          # RDMA Egress Queues (FL)
# NMPSTCAM_RDMA = 0     # RDMA MPS TCAM Entries (uses NIC's)
# NMSIX_RDMA = 4        # RDMA MSI-X Interrupt Vectors (FLIQ)
#
# NEQ_WD = 128          # Wire Direct TX Queues and FLs
# NETHCTRL_WD = 64      # Wire Direct TX Queues
# NFLIQ_WD = 64         # Wire Direct Ingress Queues with Free Lists
#
# NVI_ISCSI = 4         # ISCSI access to NPORTS
# NFLIQ_ISCSI = 4       # ISCSI Ingress Queues with Free Lists
# NETHCTRL_ISCSI = 0    # ISCSI Ethernet Control/TX Queues
# NEQ_ISCSI = 4         # ISCSI Egress Queues (FL)
# NMPSTCAM_ISCSI = 4    # ISCSI MPS TCAM Entries (NPORTS)
# NMSIX_ISCSI = 4       # ISCSI MSI-X Interrupt Vectors (FLIQ)
#
# NVI_FCOE = 4          # FCOE access to NPORTS
# NFLIQ_FCOE = 34       # FCOE Ingress Queues with Free Lists
# NETHCTRL_FCOE = 32    # FCOE Ethernet Control/TX Queues
# NEQ_FCOE = 66         # FCOE Egress Queues (FL)
# NMPSTCAM_FCOE = 32    # FCOE MPS TCAM Entries (NPORTS)
# NMSIX_FCOE = 34       # FCOE MSI-X Interrupt Vectors (FLIQ)
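#
# In the breakdown above, each application's NMSIX_* count simply matches its
# NFLIQ_* count -- one MSI-X Vector per Ingress Queue with Free List -- which
# is the "equal" end of the "less than or equal" relationship described
# above. The Microsoft HyperV resources below are the forwarded-interrupt
# case: the Virtual Ingress Queues share NCPUS (8) Forwarded Interrupt
# Queues/Vectors instead of taking a vector each.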

# Two extra Ingress Queues per function for Firmware Events and Forwarded
# Interrupts, and two extra interrupts per function for Firmware Events (or a
# Forwarded Interrupt Queue) and General Interrupts.
#
# NFLIQ_EXTRA = 6       # "extra" Ingress Queues 2*NFUNCS (Firmware and
#                       #   Forwarded Interrupts)
# NMSIX_EXTRA = 6       # extra interrupts 2*NFUNCS (Firmware and
#                       #   General Interrupts)

# Microsoft HyperV resources. The HyperV Virtual Ingress Queues will have
# their interrupts forwarded to another set of Forwarded Interrupt Queues.
#
# NVI_HYPERV = 16       # VMs we want to support
# NVIIQ_HYPERV = 2      # Virtual Ingress Queues with Free Lists per VM
# NFLIQ_HYPERV = 40     # VIQs + NCPUS Forwarded Interrupt Queues
# NEQ_HYPERV = 32       # VIQs Free Lists
# NMPSTCAM_HYPERV = 16  # MPS TCAM Entries (NVI_HYPERV)
# NMSIX_HYPERV = 8      # NCPUS Forwarded Interrupt Queues

# Adding all of the above Unified PF resource needs together: (NIC + OFLD +
# RDMA + ISCSI + FCOE + EXTRA + HYPERV)
#
# NVI_UNIFIED = 28
# NFLIQ_UNIFIED = 106
# NETHCTRL_UNIFIED = 32
# NEQ_UNIFIED = 124
# NMPSTCAM_UNIFIED = 40
#
# The sum of all the MSI-X resources above is 74 MSI-X Vectors but we'll round
# that up to 128 to make sure the Unified PF doesn't run out of resources.
#
# NMSIX_UNIFIED = 128
#
# The Storage PFs could need up to NPORTS*NCPUS + NMSIX_EXTRA MSI-X Vectors,
# which is 34, but they're probably safe with 32.
#
# NMSIX_STORAGE = 32

# Note: The UnifiedPF is PF4, which doesn't have any Virtual Functions
# associated with it. Thus, the MSI-X Vector allocations we give to the
# UnifiedPF aren't inherited by any Virtual Functions. As a result we can
# provision many more Virtual Functions than we could if the UnifiedPF were
# one of PF0-3.
#

# All of the below PCI-E parameters are actually stored in various *_init.txt
# files. We include them below essentially as comments.
#
# For PF0-3 we assign 8 vectors each for NIC Ingress Queues of the associated
# ports 0-3.
#
# For PF4, the Unified PF, we give it an MSI-X Table Size as outlined above.
#
# For PF5-6 we assign enough MSI-X Vectors to support FCoE and iSCSI
# storage applications across all four possible ports.
#
# Additionally, since the UnifiedPF isn't one of the per-port Physical
# Functions, we give the UnifiedPF and the PF0-3 Physical Functions
# different PCI Device IDs, which will allow Unified and Per-Port Drivers
# to directly select the type of Physical Function to which they wish to be
# attached.
#
# Note that the actual values used by the PCI-E Intellectual Property will be
# 1 less than those below since that's the way it "counts" things. For
# readability, we use the numbers we actually mean ...
#
# PF0_INT = 8           # NCPUS
# PF1_INT = 8           # NCPUS
# PF2_INT = 8           # NCPUS
# PF3_INT = 8           # NCPUS
# PF0_3_INT = 32        # PF0_INT + PF1_INT + PF2_INT + PF3_INT
#
# PF4_INT = 128         # NMSIX_UNIFIED
# PF5_INT = 32          # NMSIX_STORAGE
# PF6_INT = 32          # NMSIX_STORAGE
# PF7_INT = 0           # Nothing Assigned
# PF4_7_INT = 192       # PF4_INT + PF5_INT + PF6_INT + PF7_INT
#
# PF0_7_INT = 224       # PF0_3_INT + PF4_7_INT
#
# With the above we can get 17 VFs/PF0-3 (limited by 336 MPS TCAM entries)
# but we'll lower that to 16 to make our total 64 and a nice power of 2 ...
#
# NVF = 16
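#
# As a rough cross-check (assuming, per constraint 4 above, that each Virtual
# Function inherits its parent PF's MSI-X Vector count of 8, and using the
# nexactf values from the function sections below):
#
#     MSI-X: PF0_7_INT + NVF*4*8 = 224 + 512 = 736 of the 1088 available
#     MPS TCAM: 40 (Unified) + 32 (storage) + 8 (f1023) + 64*4 (VFs) = 336 of 336
#
# which lines up with the resource totals listed at the end of this file.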

# For those OSes which manage different ports on different PFs, we need
# only enough resources to support a single port's NIC application functions
# on PF0-3. The below assumes that we're only doing NIC with NCPUS "Queue
# Sets" for ports 0-3. The FCoE and iSCSI functions for such OSes will be
# managed on the "storage PFs" (see below).
#

# Some OS Drivers manage all application functions for all ports via PF4.
# Thus we need to provide a large number of resources here. For Egress
# Queues we need to account for both TX Queues as well as Free List Queues
# (because the host is responsible for producing Free List Buffers for the
# hardware to consume).
#
[function "0"]
        wx_caps = all           # write/execute permissions for all commands
        r_caps = all            # read permissions for all commands
        nvi = 28                # NVI_UNIFIED
        niqflint = 170          # NFLIQ_UNIFIED + NFLIQ_WD
        nethctrl = 96           # NETHCTRL_UNIFIED + NETHCTRL_WD
        neq = 252               # NEQ_UNIFIED + NEQ_WD
        nexactf = 40            # NMPSTCAM_UNIFIED
        cmask = all             # access to all channels
        pmask = all             # access to all four ports ...
        nroute = 32             # number of routing region entries
        nclip = 32              # number of clip region entries
        nfilter = 48            # number of filter region entries
        nserver = 32            # number of server region entries
        nhash = 2048            # number of hash region entries
        protocol = nic_vm, ofld, rddp, rdmac, iscsi_initiator_pdu, iscsi_target_pdu
        tp_l2t = 3072
        tp_ddp = 2
        tp_ddp_iscsi = 2
        tp_stag = 2
        tp_pbl = 5
        tp_rq = 7
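
# (These allocations follow directly from the definitions above:
# niqflint = NFLIQ_UNIFIED + NFLIQ_WD = 106 + 64 = 170,
# nethctrl = NETHCTRL_UNIFIED + NETHCTRL_WD = 32 + 64 = 96, and
# neq = NEQ_UNIFIED + NEQ_WD = 124 + 128 = 252.)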

# We have FCoE and iSCSI storage functions on PF5 and PF6, each of which may
# need to have Virtual Interfaces on each of the four ports with up to NCPUS
# "Queue Sets" each.
#
[function "1"]
        wx_caps = all           # write/execute permissions for all commands
        r_caps = all            # read permissions for all commands
        nvi = 4                 # NPORTS
        niqflint = 34           # NPORTS*NCPUS + NMSIX_EXTRA
        nethctrl = 32           # NPORTS*NCPUS
        neq = 66                # NPORTS*NCPUS * 2 (FL, ETHCTRL/TX) + 2 (EXTRA)
        nexactf = 32            # NPORTS + adding 28 exact entries for FCoE
                                # which is OK since < MIN(SUM PF0..3, PF4)
                                # and we never load PF0..3 and PF4 concurrently
        cmask = all             # access to all channels
        pmask = all             # access to all four ports ...
        nhash = 2048
        protocol = fcoe_initiator
        tp_ddp = 2
        fcoe_nfcf = 16
        fcoe_nvnp = 32
        fcoe_nssn = 1024

# The following function, 1023, is not an actual PCIE function but is used to
# configure and reserve firmware internal resources that come from the global
# resource pool.
#
[function "1023"]
        wx_caps = all           # write/execute permissions for all commands
        r_caps = all            # read permissions for all commands
        nvi = 4                 # NVI_UNIFIED
        cmask = all             # access to all channels
        pmask = all             # access to all four ports ...
        nexactf = 8             # NPORTS + DCBX +
        nfilter = 16            # number of filter region entries

# For Virtual functions, we only allow NIC functionality and we only allow
# access to one port (1 << PF). Note that because of limitations in the
# Scatter Gather Engine (SGE) hardware which checks writes to VF KDOORBELL
# and GTS registers, the number of Ingress and Egress Queues must be a power
# of 2.
#
[function "0/*"]                # NVF
        wx_caps = 0x82          # DMAQ | VF
        r_caps = 0x86           # DMAQ | VF | PORT
        nvi = 1                 # 1 port
        niqflint = 4            # 2 "Queue Sets" + NXIQ
        nethctrl = 2            # 2 "Queue Sets"
        neq = 4                 # 2 "Queue Sets" * 2
        nexactf = 4
        cmask = all             # access to all channels
        pmask = 0x1             # access to only one port ...

[function "1/*"]                # NVF
        wx_caps = 0x82          # DMAQ | VF
        r_caps = 0x86           # DMAQ | VF | PORT
        nvi = 1                 # 1 port
        niqflint = 4            # 2 "Queue Sets" + NXIQ
        nethctrl = 2            # 2 "Queue Sets"
        neq = 4                 # 2 "Queue Sets" * 2
        nexactf = 4
        cmask = all             # access to all channels
        pmask = 0x2             # access to only one port ...

# MPS features a 196608-byte ingress buffer that is used for ingress buffering
# for packets from the wire as well as the loopback path of the L2 switch. The
# following parameters control how the buffer memory is distributed and the L2
# flow control settings:
#
# bg_mem:       %-age of mem to use for port/buffer group
# lpbk_mem:     %-age of port/bg mem to use for loopback
# hwm:          high watermark; bytes available when starting to send pause
#               frames (in units of 0.1 MTU)
# lwm:          low watermark; bytes remaining when sending 'unpause' frame
#               (in units of 0.1 MTU)
# dwm:          minimum delta between high and low watermark (in units of 100
#               Bytes)
#
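# For example (reading bg_mem and lpbk_mem as straight percentages of the
# 196608-byte buffer): with the settings below, each port's buffer group
# gets 25% = 49152 bytes, of which 25% = 12288 bytes serves the loopback
# path.
#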
[port "0"]
        dcb = ppp, dcbx         # configure for DCB PPP and enable DCBX offload
        bg_mem = 25
        lpbk_mem = 25
        hwm = 30
        lwm = 15
        dwm = 30

[port "1"]
        dcb = ppp, dcbx
        bg_mem = 25
        lpbk_mem = 25
        hwm = 30
        lwm = 15
        dwm = 30

[port "2"]
        dcb = ppp, dcbx
        bg_mem = 25
        lpbk_mem = 25
        hwm = 30
        lwm = 15
        dwm = 30

[port "3"]
        dcb = ppp, dcbx
        bg_mem = 25
        lpbk_mem = 25
        hwm = 30
        lwm = 15
        dwm = 30

[fini]
        version = 0x1425000d
        checksum = 0x22f1530b

# Total resources used by above allocations:
#   Virtual Interfaces: 104
#   Ingress Queues/w Free Lists and Interrupts: 526
#   Egress Queues: 702
#   MPS TCAM Entries: 336
#   MSI-X Vectors: 736
#   Virtual Functions: 64
#
# $FreeBSD: head/sys/dev/cxgbe/firmware/t5fw_cfg_fpga.txt 298976 2016-05-03 11:49:29Z pfg $
#