# Chelsio T5 Factory Default configuration file.
#
# Copyright (C) 2010-2013 Chelsio Communications.  All rights reserved.
#
#   DO NOT MODIFY THIS FILE UNDER ANY CIRCUMSTANCES.  MODIFICATION OF
#   THIS FILE WILL RESULT IN A NON-FUNCTIONAL T5 ADAPTER AND MAY RESULT
#   IN PHYSICAL DAMAGE TO T5 ADAPTERS.

# This file provides the default, power-on configuration for 4-port T5-based
# adapters shipped from the factory.  These defaults are designed to address
# the needs of the vast majority of T5 customers.  The basic idea is to have
# a default configuration which allows a customer to plug a T5 adapter in and
# have it work regardless of OS, driver or application, except in the most
# unusual and/or demanding customer applications.
#
# Many of the T5 resources which are described by this configuration are
# finite.  This requires balancing the configuration/operation needs of
# device drivers across OSes and a large number of customer applications.
#
# Some of the more important resources to allocate and their constraints are:
#  1. Virtual Interfaces: 128.
#  2. Ingress Queues with Free Lists: 1024.  PCI-E SR-IOV Virtual Functions
#     must use a power of 2 Ingress Queues.
#  3. Egress Queues: 128K.  PCI-E SR-IOV Virtual Functions must use a
#     power of 2 Egress Queues.
#  4. MSI-X Vectors: 1088.  A complication here is that the PCI-E SR-IOV
#     Virtual Functions based off of a Physical Function all get the
#     same number of MSI-X Vectors as the base Physical Function.
#     Additionally, regardless of whether Virtual Functions are enabled or
#     not, their MSI-X "needs" are counted by the PCI-E implementation.
#     Finally, all Physical Functions capable of supporting Virtual
#     Functions (PF0-3) must have the same number of configured TotalVFs in
#     their SR-IOV Capabilities.
#  5. Multi-Port Support (MPS) TCAM: 336 entries to support MAC destination
#     address matching on Ingress Packets.
#
# Some of the important OS/Driver resource needs are:
#  6. Some OS Drivers will manage all resources through a single Physical
#     Function (currently PF0 but it could be any Physical Function).  Thus,
#     this "Unified PF" will need to have enough resources allocated to it
#     to allow for this.  Because of the MSI-X resource allocation
#     constraints mentioned above, this probably means we'll either have to
#     severely limit the TotalVFs if we continue to use PF0 as the Unified
#     PF, or we'll need to move the Unified PF into the PF4-7 range since
#     those Physical Functions don't have any Virtual Functions associated
#     with them.
#  7. Some OS Drivers will manage different ports and functions (NIC,
#     storage, etc.) on different Physical Functions.  For example, NIC
#     functions for ports 0-3 on PF0-3, FCoE on PF4, iSCSI on PF5, etc.
#
# Some of the customer application needs which need to be accommodated:
#  8. Some customers will want to support large CPU count systems with
#     good scaling.  Thus, we'll need to accommodate a number of
#     Ingress Queues and MSI-X Vectors to allow up to some number of CPUs
#     to be involved per port and per application function.  For example,
#     in the case where all ports and application functions will be
#     managed via a single Unified PF and we want to accommodate scaling up
#     to 8 CPUs, we would want:
#
#         4 ports *
#         3 application functions (NIC, FCoE, iSCSI) per port *
#         8 Ingress Queue/MSI-X Vectors per application function
#
#     for a total of 96 Ingress Queues and MSI-X Vectors on the Unified PF.
#     (Plus a few for Firmware Event Queues, etc.  A C sketch of this
#     arithmetic appears just after this list.)
#
#  9. Some customers will want to use T5's PCI-E SR-IOV Capability to allow
#     Virtual Machines to directly access T5 functionality via SR-IOV
#     Virtual Functions and "PCI Device Passthrough" -- this is especially
#     true for the NIC application functionality.  (Note that there is
#     currently no ability to use the TOE, FCoE, iSCSI, etc. via Virtual
#     Functions so this is in fact solely limited to NIC.)
#
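# As a sanity check on the arithmetic in item 8 above, a minimal C sketch
# (nports/nfuncs/ncpus mirror the NPORTS/NFUNCS/NCPUS shorthand defined
# further below; they are illustrative values, not firmware parameters):
#
#     #include <stdio.h>
#
#     int main(void) {
#         int nports = 4;     /* NPORTS */
#         int nfuncs = 3;     /* NFUNCS: NIC, FCoE, iSCSI */
#         int ncpus = 8;      /* NCPUS we want to scale to */
#
#         /* one Ingress Queue/MSI-X Vector per CPU, per application
#          * function, per port (Firmware Event Queues etc. are extra) */
#         printf("%d\n", nports * nfuncs * ncpus);    /* prints 96 */
#         return 0;
#     }
#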

# Global configuration settings.
#
[global]
        rss_glb_config_mode = basicvirtual
        rss_glb_config_options = tnlmapen,hashtoeplitz,tnlalllkp

        # PL_TIMEOUT register
        pl_timeout_value = 200          # the timeout value in units of us

        # The following Scatter Gather Engine (SGE) settings assume a 4KB
        # Host Page Size and a 64B L1 Cache Line Size.  They program the
        # EgrStatusPageSize and IngPadBoundary to 64B and the PktShift to 2.
        # If a Master PF Driver finds itself on a machine with different
        # parameters, then the Master PF Driver is responsible for
        # initializing these parameters to appropriate values.
        #
        # Notes:
        #  1. The Free List Buffer Sizes below are raw and the firmware will
        #     round them up to the Ingress Padding Boundary.
        #  2. The SGE Timer Values below are expressed in microseconds.
        #     The firmware will convert these values to Core Clock Ticks
        #     when it processes the configuration parameters.
        #
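        # To make Notes 1 and 2 concrete, a small C sketch of both
        # conversions (the 64B boundary matches this file's assumptions;
        # the 1500-byte raw size and 500 MHz core clock are example values
        # only, not taken from this file):
        #
        #     #include <stdio.h>
        #
        #     int main(void) {
        #         unsigned int pad = 64;          /* Ingress Padding Boundary */
        #         unsigned int raw = 1500;        /* example raw FL buffer size */
        #         unsigned int us = 5;            /* an SGE Timer Value */
        #         unsigned int clk_khz = 500000;  /* example core clock only */
        #
        #         /* Note 1: firmware rounds raw sizes up to the boundary */
        #         printf("%u\n", (raw + pad - 1) & ~(pad - 1));   /* 1536 */
        #         /* Note 2: microseconds -> Core Clock Ticks */
        #         printf("%u\n", us * clk_khz / 1000);            /* 2500 */
        #         return 0;
        #     }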
        #
        reg[0x1008] = 0x40810/0x21c70   # SGE_CONTROL
        reg[0x100c] = 0x22222222        # SGE_HOST_PAGE_SIZE
        reg[0x10a0] = 0x01040810        # SGE_INGRESS_RX_THRESHOLD
        reg[0x1044] = 4096              # SGE_FL_BUFFER_SIZE0
        reg[0x1048] = 65536             # SGE_FL_BUFFER_SIZE1
        reg[0x104c] = 1536              # SGE_FL_BUFFER_SIZE2
        reg[0x1050] = 9024              # SGE_FL_BUFFER_SIZE3
        reg[0x1054] = 9216              # SGE_FL_BUFFER_SIZE4
        reg[0x1058] = 2048              # SGE_FL_BUFFER_SIZE5
        reg[0x105c] = 128               # SGE_FL_BUFFER_SIZE6
        reg[0x1060] = 8192              # SGE_FL_BUFFER_SIZE7
        reg[0x1064] = 16384             # SGE_FL_BUFFER_SIZE8
        reg[0x10a4] = 0xa000a000/0xf000f000     # SGE_DBFIFO_STATUS
        reg[0x10a8] = 0x402000/0x402000         # SGE_DOORBELL_CONTROL

        # SGE_THROTTLE_CONTROL
        bar2throttlecount = 500         # bar2throttlecount in us

        sge_timer_value = 5, 10, 20, 50, 100, 200 # SGE_TIMER_VALUE* in usecs


        reg[0x1124] = 0x00000400/0x00000400 # SGE_CONTROL2, enable VFIFO; if
                                        # SGE_VFIFO_SIZE is not set, then
                                        # the firmware will set it up based
                                        # on the number of egress queues
                                        # in use

        reg[0x1130] = 0x00d5ffeb        # SGE_DBP_FETCH_THRESHOLD, fetch
                                        # threshold set to queue depth
                                        # minus 128 entries for FL and HP
                                        # queues, and 0xfff for LP, which
                                        # prompts the firmware to set it up
                                        # based on the number of egress
                                        # queues in use

        reg[0x113c] = 0x0002ffc0        # SGE_VFIFO_SIZE, set to 0x2ffc0,
                                        # which prompts the firmware to set
                                        # it up based on the number of
                                        # egress queues in use

        reg[0x7dc0] = 0x062f8849        # TP_SHIFT_CNT

        # Selection of tuples for LE filter lookup, fields (and widths which
        # must sum to <= 36): { IP Fragment (1), MPS Match Type (3),
        # IP Protocol (8), [Inner] VLAN (17), Port (3), FCoE (1) }
        #
        filterMode = srvrsram, fragmentation, mpshittype, protocol, vlan, port, fcoe
        filterMask = protocol, fcoe
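
        # A quick C check of the LE filter tuple width budget described in
        # the filterMode comment above (the widths are the ones listed
        # there; 36 is the stated upper bound):
        #
        #     #include <assert.h>
        #     #include <stdio.h>
        #
        #     int main(void) {
        #         /* frag, mpshittype, protocol, vlan, port, fcoe */
        #         int widths[] = { 1, 3, 8, 17, 3, 1 };
        #         int i, sum = 0;
        #
        #         for (i = 0; i < 6; i++)
        #             sum += widths[i];
        #         printf("%d\n", sum);        /* 33 */
        #         assert(sum <= 36);          /* fits the 36-bit budget */
        #         return 0;
        #     }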

        # Percentage of dynamic memory (in either the EDRAM or external MEM)
        # to use for TP RX payload
        tp_pmrx = 30

        # TP RX payload page size
        tp_pmrx_pagesize = 64K

        # TP number of RX channels
        tp_nrxch = 0                    # 0 (auto) = 1

        # Percentage of dynamic memory (in either the EDRAM or external MEM)
        # to use for TP TX payload
        tp_pmtx = 50

        # TP TX payload page size
        tp_pmtx_pagesize = 64K

        # TP number of TX channels
        tp_ntxch = 0                    # 0 (auto) = equal number of ports

        # TP_GLOBAL_CONFIG
        reg[0x7d08] = 0x00000800/0x00000800 # set IssFromCplEnable

        # LE_DB_CONFIG
        reg[0x19c04] = 0x00400000/0x00400000 # LE Server SRAM Enable

# Some "definitions" to make the rest of this a bit more readable.  We support
# 4 ports, 3 functions (NIC, FCoE and iSCSI), scaling up to 8 "CPU Queue Sets"
# per function per port ...
#
# NMSIX = 1088                  # available MSI-X Vectors
# NVI = 128                     # available Virtual Interfaces
# NMPSTCAM = 336                # MPS TCAM entries
#
# NPORTS = 4                    # ports
# NCPUS = 8                     # CPUs we want to support scalably
# NFUNCS = 3                    # functions per port (NIC, FCoE, iSCSI)

# Breakdown of Virtual Interface/Queue/Interrupt resources for the "Unified
# PF" which many OS Drivers will use to manage most or all functions.
#
# Each Ingress Queue can use one MSI-X interrupt but some Ingress Queues can
# use Forwarded Interrupt Ingress Queues instead.  For the latter, an Ingress
# Queue is created with the Queue ID of a Forwarded Interrupt Ingress Queue
# specified as its "Ingress Queue Asynchronous Destination Index."  Thus, the
# number of MSI-X Vectors assigned to the Unified PF will be less than or
# equal to the number of Ingress Queues ...
#
# NVI_NIC = 4                   # NIC access to NPORTS
# NFLIQ_NIC = 32                # NIC Ingress Queues with Free Lists
# NETHCTRL_NIC = 32             # NIC Ethernet Control/TX Queues
# NEQ_NIC = 64                  # NIC Egress Queues (FL, ETHCTRL/TX)
# NMPSTCAM_NIC = 16             # NIC MPS TCAM Entries (NPORTS*4)
# NMSIX_NIC = 32                # NIC MSI-X Interrupt Vectors (FLIQ)
#
# NVI_OFLD = 0                  # Offload uses NIC function to access ports
# NFLIQ_OFLD = 16               # Offload Ingress Queues with Free Lists
# NETHCTRL_OFLD = 0             # Offload Ethernet Control/TX Queues
# NEQ_OFLD = 16                 # Offload Egress Queues (FL)
# NMPSTCAM_OFLD = 0             # Offload MPS TCAM Entries (uses NIC's)
# NMSIX_OFLD = 16               # Offload MSI-X Interrupt Vectors (FLIQ)
#
# NVI_RDMA = 0                  # RDMA uses NIC function to access ports
# NFLIQ_RDMA = 4                # RDMA Ingress Queues with Free Lists
# NETHCTRL_RDMA = 0             # RDMA Ethernet Control/TX Queues
# NEQ_RDMA = 4                  # RDMA Egress Queues (FL)
# NMPSTCAM_RDMA = 0             # RDMA MPS TCAM Entries (uses NIC's)
# NMSIX_RDMA = 4                # RDMA MSI-X Interrupt Vectors (FLIQ)
#
# NEQ_WD = 128                  # Wire Direct TX Queues and FLs
# NETHCTRL_WD = 64              # Wire Direct TX Queues
# NFLIQ_WD = 64                 # Wire Direct Ingress Queues with Free Lists
#
# NVI_ISCSI = 4                 # ISCSI access to NPORTS
# NFLIQ_ISCSI = 4               # ISCSI Ingress Queues with Free Lists
# NETHCTRL_ISCSI = 0            # ISCSI Ethernet Control/TX Queues
# NEQ_ISCSI = 4                 # ISCSI Egress Queues (FL)
# NMPSTCAM_ISCSI = 4            # ISCSI MPS TCAM Entries (NPORTS)
# NMSIX_ISCSI = 4               # ISCSI MSI-X Interrupt Vectors (FLIQ)
#
# NVI_FCOE = 4                  # FCOE access to NPORTS
# NFLIQ_FCOE = 34               # FCOE Ingress Queues with Free Lists
# NETHCTRL_FCOE = 32            # FCOE Ethernet Control/TX Queues
# NEQ_FCOE = 66                 # FCOE Egress Queues (FL, ETHCTRL/TX)
# NMPSTCAM_FCOE = 32            # FCOE MPS TCAM Entries
# NMSIX_FCOE = 34               # FCOE MSI-X Interrupt Vectors (FLIQ)
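#
# As a small C illustration of how the Egress Queue counts above are derived
# (each Free List and each ETHCTRL/TX queue consumes one Egress Queue, so
# NEQ = NFLIQ + NETHCTRL for the NIC and FCoE functions):
#
#     #include <stdio.h>
#
#     int main(void) {
#         int nfliq_nic = 32, nethctrl_nic = 32;      /* NIC */
#         int nfliq_fcoe = 34, nethctrl_fcoe = 32;    /* FCoE */
#
#         printf("NEQ_NIC = %d\n", nfliq_nic + nethctrl_nic);     /* 64 */
#         printf("NEQ_FCOE = %d\n", nfliq_fcoe + nethctrl_fcoe);  /* 66 */
#         return 0;
#     }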

# Two extra Ingress Queues per function for Firmware Events and Forwarded
# Interrupts, and two extra interrupts per function for Firmware Events (or a
# Forwarded Interrupt Queue) and General Interrupts per function.
#
# NFLIQ_EXTRA = 6               # "extra" Ingress Queues 2*NFUNCS (Firmware
#                               # Events and Forwarded Interrupts)
# NMSIX_EXTRA = 6               # extra interrupts 2*NFUNCS (Firmware Events
#                               # and General Interrupts)

# Microsoft HyperV resources.  The HyperV Virtual Ingress Queues will have
# their interrupts forwarded to another set of Forwarded Interrupt Queues.
#
# NVI_HYPERV = 16               # VMs we want to support
# NVIIQ_HYPERV = 2              # Virtual Ingress Queues with Free Lists per VM
# NFLIQ_HYPERV = 40             # VIQs + NCPUS Forwarded Interrupt Queues
# NEQ_HYPERV = 32               # VIQs Free Lists
# NMPSTCAM_HYPERV = 16          # MPS TCAM Entries (NVI_HYPERV)
# NMSIX_HYPERV = 8              # NCPUS Forwarded Interrupt Queues

# Adding all of the above Unified PF resource needs together: (NIC + OFLD +
# RDMA + ISCSI + FCOE + EXTRA + HYPERV)
#
# NVI_UNIFIED = 28
# NFLIQ_UNIFIED = 106
# NETHCTRL_UNIFIED = 32
# NEQ_UNIFIED = 124
# NMPSTCAM_UNIFIED = 40
#
# The sum of all the MSI-X resources above is 74 MSI-X Vectors but we'll round
# that up to 128 to make sure the Unified PF doesn't run out of resources.
#
# NMSIX_UNIFIED = 128
#
# The Storage PFs could need up to NPORTS*NCPUS + NMSIX_EXTRA MSI-X Vectors
# which is 34 but they're probably safe with 32.
#
# NMSIX_STORAGE = 32

# Note: The UnifiedPF is PF4 which doesn't have any Virtual Functions
# associated with it.  Thus, the MSI-X Vector allocations we give to the
# UnifiedPF aren't inherited by any Virtual Functions.  As a result we can
# provision many more Virtual Functions than we could if the UnifiedPF were
# one of PF0-3.
#

# All of the below PCI-E parameters are actually stored in various *_init.txt
# files.  We include them below essentially as comments.
#
# For PF0-3 we assign 8 vectors each for NIC Ingress Queues of the associated
# ports 0-3.
#
# For PF4, the Unified PF, we give it an MSI-X Table Size as outlined above.
#
# For PF5-6 we assign enough MSI-X Vectors to support FCoE and iSCSI
# storage applications across all four possible ports.
#
# Additionally, since the UnifiedPF isn't one of the per-port Physical
# Functions, we give the UnifiedPF and the PF0-3 Physical Functions
# different PCI Device IDs which will allow Unified and Per-Port Drivers
# to directly select the type of Physical Function to which they wish to be
# attached.
#
# Note that the actual values used for the PCI-E Intellectual Property will be
# 1 less than those below since that's the way it "counts" things.  For
# readability, we use the number we actually mean ...
#
# PF0_INT = 8                   # NCPUS
# PF1_INT = 8                   # NCPUS
# PF2_INT = 8                   # NCPUS
# PF3_INT = 8                   # NCPUS
# PF0_3_INT = 32                # PF0_INT + PF1_INT + PF2_INT + PF3_INT
#
# PF4_INT = 128                 # NMSIX_UNIFIED
# PF5_INT = 32                  # NMSIX_STORAGE
# PF6_INT = 32                  # NMSIX_STORAGE
# PF7_INT = 0                   # Nothing Assigned
# PF4_7_INT = 192               # PF4_INT + PF5_INT + PF6_INT + PF7_INT
#
# PF0_7_INT = 224               # PF0_3_INT + PF4_7_INT
#
# With the above we can get 17 VFs on each of PF0-3 (limited by 336 MPS TCAM
# entries) but we'll lower that to 16 to make our total 64 and a nice power
# of 2 ...
#
# NVF = 16
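#
# A short C check of the interrupt totals above, including the "1 less"
# encoding the PCI-E Intellectual Property uses for what it actually stores:
#
#     #include <stdio.h>
#
#     int main(void) {
#         int pf_int[8] = { 8, 8, 8, 8, 128, 32, 32, 0 };
#         int i, total = 0;
#
#         for (i = 0; i < 8; i++)
#             total += pf_int[i];
#         printf("PF0_7_INT = %d\n", total);          /* 224 */
#         /* what the PCI-E IP would store for PF4's 128 vectors */
#         printf("encoded = %d\n", pf_int[4] - 1);    /* 127 */
#         return 0;
#     }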

# For those OSes which manage different ports on different PFs, we need
# only enough resources to support a single port's NIC application functions
# on PF0-3.  The below assumes that we're only doing NIC with NCPUS "Queue
# Sets" for ports 0-3.  The FCoE and iSCSI functions for such OSes will be
# managed on the "storage PFs" (see below).
#
[function "0"]
        nvf = 16                # NVF on this function
        wx_caps = all           # write/execute permissions for all commands
        r_caps = all            # read permissions for all commands
        nvi = 1                 # 1 port
        niqflint = 8            # NCPUS "Queue Sets"
        nethctrl = 8            # NCPUS "Queue Sets"
        neq = 16                # niqflint + nethctrl Egress Queues
        nexactf = 8             # number of exact MPSTCAM MAC filters
        cmask = all             # access to all channels
        pmask = 0x1             # access to only one port

[function "1"]
        nvf = 16                # NVF on this function
        wx_caps = all           # write/execute permissions for all commands
        r_caps = all            # read permissions for all commands
        nvi = 1                 # 1 port
        niqflint = 8            # NCPUS "Queue Sets"
        nethctrl = 8            # NCPUS "Queue Sets"
        neq = 16                # niqflint + nethctrl Egress Queues
        nexactf = 8             # number of exact MPSTCAM MAC filters
        cmask = all             # access to all channels
        pmask = 0x2             # access to only one port

[function "2"]
        nvf = 16                # NVF on this function
        wx_caps = all           # write/execute permissions for all commands
        r_caps = all            # read permissions for all commands
        nvi = 1                 # 1 port
        niqflint = 8            # NCPUS "Queue Sets"
        nethctrl = 8            # NCPUS "Queue Sets"
        neq = 16                # niqflint + nethctrl Egress Queues
        nexactf = 8             # number of exact MPSTCAM MAC filters
        cmask = all             # access to all channels
        pmask = 0x4             # access to only one port

[function "3"]
        nvf = 16                # NVF on this function
        wx_caps = all           # write/execute permissions for all commands
        r_caps = all            # read permissions for all commands
        nvi = 1                 # 1 port
        niqflint = 8            # NCPUS "Queue Sets"
        nethctrl = 8            # NCPUS "Queue Sets"
        neq = 16                # niqflint + nethctrl Egress Queues
        nexactf = 8             # number of exact MPSTCAM MAC filters
        cmask = all             # access to all channels
        pmask = 0x8             # access to only one port
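
# The pmask values above are one-hot port masks; a tiny C sketch of the
# relationship (the same "1 << PF" rule is noted for the Virtual Functions
# further below):
#
#     #include <stdio.h>
#
#     int main(void) {
#         int pf;
#
#         for (pf = 0; pf < 4; pf++)
#             printf("PF%d pmask = 0x%x\n", pf, 1 << pf);
#         /* 0x1, 0x2, 0x4, 0x8 */
#         return 0;
#     }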

# Some OS Drivers manage all application functions for all ports via PF4.
# Thus we need to provide a large number of resources here.  For Egress
# Queues we need to account for both TX Queues as well as Free List Queues
# (because the host is responsible for producing Free List Buffers for the
# hardware to consume).
#
[function "4"]
        wx_caps = all           # write/execute permissions for all commands
        r_caps = all            # read permissions for all commands
        nvi = 28                # NVI_UNIFIED
        niqflint = 170          # NFLIQ_UNIFIED + NFLIQ_WD
        nethctrl = 100          # NETHCTRL_UNIFIED + NETHCTRL_WD
        neq = 256               # NEQ_UNIFIED + NEQ_WD
        nexactf = 40            # NMPSTCAM_UNIFIED
        cmask = all             # access to all channels
        pmask = all             # access to all four ports ...
        nethofld = 1024         # number of user mode ethernet flow contexts
        nroute = 32             # number of routing region entries
        nclip = 32              # number of clip region entries
        nfilter = 496           # number of filter region entries
        nserver = 496           # number of server region entries
        nhash = 12288           # number of hash region entries
        protocol = nic_vm, ofld, rddp, rdmac, iscsi_initiator_pdu, iscsi_target_pdu
        tp_l2t = 3072
        tp_ddp = 2
        tp_ddp_iscsi = 2
        tp_stag = 2
        tp_pbl = 5
        tp_rq = 7

# We have FCoE and iSCSI storage functions on PF5 and PF6 each of which may
# need to have Virtual Interfaces on each of the four ports with up to NCPUS
# "Queue Sets" each.
#
[function "5"]
        wx_caps = all           # write/execute permissions for all commands
        r_caps = all            # read permissions for all commands
        nvi = 4                 # NPORTS
        niqflint = 34           # NPORTS*NCPUS + NMSIX_EXTRA
        nethctrl = 32           # NPORTS*NCPUS
        neq = 64                # NPORTS*NCPUS * 2 (FL, ETHCTRL/TX)
        nexactf = 4             # NPORTS
        cmask = all             # access to all channels
        pmask = all             # access to all four ports ...
        nserver = 16
        nhash = 2048
        tp_l2t = 1024
        protocol = iscsi_initiator_fofld
        tp_ddp_iscsi = 2
        iscsi_ntask = 2048
        iscsi_nsess = 2048
        iscsi_nconn_per_session = 1
        iscsi_ninitiator_instance = 64

[function "6"]
        wx_caps = all           # write/execute permissions for all commands
        r_caps = all            # read permissions for all commands
        nvi = 4                 # NPORTS
        niqflint = 34           # NPORTS*NCPUS + NMSIX_EXTRA
        nethctrl = 32           # NPORTS*NCPUS
        neq = 66                # NPORTS*NCPUS * 2 (FL, ETHCTRL/TX) + 2 (EXTRA)
        nexactf = 32            # NPORTS + 28 extra exact entries for FCoE,
                                # which is OK since < MIN(SUM PF0..3, PF4)
                                # and we never load PF0..3 and PF4 concurrently
        cmask = all             # access to all channels
        pmask = all             # access to all four ports ...
        nhash = 2048
        protocol = fcoe_initiator
        tp_ddp = 2
        fcoe_nfcf = 16
        fcoe_nvnp = 32
        fcoe_nssn = 1024
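
# A brief C recap of the storage-PF queue arithmetic above (the per-function
# share of the extra Firmware Event/Forwarded Interrupt queues is 2, i.e.
# NMSIX_EXTRA / NFUNCS):
#
#     #include <stdio.h>
#
#     int main(void) {
#         int nports = 4, ncpus = 8;
#         int extra = 2;      /* NMSIX_EXTRA / NFUNCS */
#
#         printf("niqflint = %d\n", nports * ncpus + extra);  /* 34 */
#         printf("nethctrl = %d\n", nports * ncpus);          /* 32 */
#         /* each "Queue Set" needs an FL and an ETHCTRL/TX Egress Queue */
#         printf("neq = %d\n", nports * ncpus * 2);           /* 64 */
#         return 0;
#     }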

# The following function, 1023, is not an actual PCIE function but is used to
# configure and reserve firmware internal resources that come from the global
# resource pool.
#
[function "1023"]
        wx_caps = all           # write/execute permissions for all commands
        r_caps = all            # read permissions for all commands
        nvi = 4                 # NVI_UNIFIED
        cmask = all             # access to all channels
        pmask = all             # access to all four ports ...
        nexactf = 8             # NPORTS + DCBX +
        nfilter = 16            # number of filter region entries

# For Virtual Functions, we only allow NIC functionality and we only allow
# access to one port (1 << PF).  Note that because of limitations in the
# Scatter Gather Engine (SGE) hardware which checks writes to VF KDOORBELL
# and GTS registers, the number of Ingress and Egress Queues must be a power
# of 2.
#
[function "0/*"]                # NVF
        wx_caps = 0x82          # DMAQ | VF
        r_caps = 0x86           # DMAQ | VF | PORT
        nvi = 1                 # 1 port
        niqflint = 4            # 2 "Queue Sets" + NXIQ
        nethctrl = 2            # 2 "Queue Sets"
        neq = 4                 # 2 "Queue Sets" * 2
        nexactf = 4
        cmask = all             # access to all channels
        pmask = 0x1             # access to only one port ...

[function "1/*"]                # NVF
        wx_caps = 0x82          # DMAQ | VF
        r_caps = 0x86           # DMAQ | VF | PORT
        nvi = 1                 # 1 port
        niqflint = 4            # 2 "Queue Sets" + NXIQ
        nethctrl = 2            # 2 "Queue Sets"
        neq = 4                 # 2 "Queue Sets" * 2
        nexactf = 4
        cmask = all             # access to all channels
        pmask = 0x2             # access to only one port ...

[function "2/*"]                # NVF
        wx_caps = 0x82          # DMAQ | VF
        r_caps = 0x86           # DMAQ | VF | PORT
        nvi = 1                 # 1 port
        niqflint = 4            # 2 "Queue Sets" + NXIQ
        nethctrl = 2            # 2 "Queue Sets"
        neq = 4                 # 2 "Queue Sets" * 2
        nexactf = 4
        cmask = all             # access to all channels
        pmask = 0x4             # access to only one port ...

[function "3/*"]                # NVF
        wx_caps = 0x82          # DMAQ | VF
        r_caps = 0x86           # DMAQ | VF | PORT
        nvi = 1                 # 1 port
        niqflint = 4            # 2 "Queue Sets" + NXIQ
        nethctrl = 2            # 2 "Queue Sets"
        neq = 4                 # 2 "Queue Sets" * 2
        nexactf = 4
        cmask = all             # access to all channels
        pmask = 0x8             # access to only one port ...
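
# A minimal C check of the power-of-2 queue constraint described above,
# applied to the VF Ingress/Egress Queue counts used here:
#
#     #include <assert.h>
#
#     /* a non-zero n is a power of 2 iff it has exactly one bit set */
#     static int is_pow2(unsigned int n) {
#         return n != 0 && (n & (n - 1)) == 0;
#     }
#
#     int main(void) {
#         unsigned int niqflint = 4, neq = 4;     /* VF values above */
#
#         assert(is_pow2(niqflint) && is_pow2(neq));
#         return 0;
#     }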

# MPS has a 196608-byte ingress buffer that is used for ingress buffering
# for packets from the wire as well as the loopback path of the L2 switch.
# The following params control how the buffer memory is distributed and the
# L2 flow control settings:
#
# bg_mem:       percentage of mem to use for port/buffer group
# lpbk_mem:     percentage of port/bg mem to use for loopback
# hwm:          high watermark; bytes available when starting to send pause
#               frames (in units of 0.1 MTU)
# lwm:          low watermark; bytes remaining when sending 'unpause' frame
#               (in units of 0.1 MTU)
# dwm:          minimum delta between high and low watermark (in units of 100
#               Bytes)
#
[port "0"]
        dcb = ppp, dcbx         # configure for DCB PPP and enable DCBX offload
        bg_mem = 25
        lpbk_mem = 25
        hwm = 30
        lwm = 15
        dwm = 30

[port "1"]
        dcb = ppp, dcbx
        bg_mem = 25
        lpbk_mem = 25
        hwm = 30
        lwm = 15
        dwm = 30

[port "2"]
        dcb = ppp, dcbx
        bg_mem = 25
        lpbk_mem = 25
        hwm = 30
        lwm = 15
        dwm = 30

[port "3"]
        dcb = ppp, dcbx
        bg_mem = 25
        lpbk_mem = 25
        hwm = 30
        lwm = 15
        dwm = 30

[fini]
        version = 0x1425000f
        checksum = 0x23a2d850

# Total resources used by above allocations:
#   Virtual Interfaces: 104
#   Ingress Queues/w Free Lists and Interrupts: 526
#   Egress Queues: 702
#   MPS TCAM Entries: 336
#   MSI-X Vectors: 736
#   Virtual Functions: 64
#
# $FreeBSD: head/sys/dev/cxgbe/firmware/t5fw_cfg_uwire.txt 252661 2013-07-03 23:52:15Z np $
#
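# A rough C illustration of the watermark units described in the [port]
# sections above (the 1500-byte MTU is an example value only; the MTU in
# effect depends on the link configuration):
#
#     #include <stdio.h>
#
#     int main(void) {
#         int mtu = 1500;                     /* example MTU */
#         int hwm = 30, lwm = 15, dwm = 30;   /* values from above */
#
#         printf("hwm = %d bytes\n", hwm * mtu / 10);  /* 0.1 MTU units */
#         printf("lwm = %d bytes\n", lwm * mtu / 10);
#         printf("dwm = %d bytes\n", dwm * 100);       /* 100-byte units */
#         return 0;
#     }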