#
f3e6b3ae |
|
18-Feb-2024 |
Dan Williams <dan.j.williams@intel.com> |
acpi/ghes: Remove CXL CPER notifications Initial tests with the CXL CPER implementation identified that error reports were being duplicated in the log and the trace event [1]. Then it was discovered that the notification handler took sleeping locks while the GHES event handling runs in spin_lock_irqsave() context [2]. While the duplicate reporting was fixed in v6.8-rc4, the fix for the sleeping-lock-vs-atomic collision would benefit from more time to settle and gain some test cycles. Given how late it is in the development cycle, remove the CXL hookup for now and try again during the next merge window. Note that the end result is that v6.8 does not emit CXL CPER payloads to the kernel log, but this is in line with the CXL trend to move error reporting to trace events instead of the kernel log. Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Rafael J. Wysocki <rafael@kernel.org> Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Link: http://lore.kernel.org/r/20240108165855.00002f5a@Huawei.com [1] Closes: http://lore.kernel.org/r/b963c490-2c13-4b79-bbe7-34c6568423c7@moroto.mountain [2] Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
d72a4caf |
|
17-Jan-2024 |
Ira Weiny <ira.weiny@intel.com> |
cxl/pci: Skip irq features if MSI/MSI-X are not supported CXL 3.1 Section 3.1.1 states: "A Function on a CXL device must not generate INTx messages if that Function participates in CXL.cache protocol or CXL.mem protocols." The generic CXL memory driver only supports devices which use the CXL.mem protocol. The current driver attempts to allocate MSI/MSI-X vectors in anticipation of their need for mailbox interrupts or event processing. However, the above requirement does not require a device to support interrupts, only that any interrupts it does generate use MSI/MSI-X. For example, a device may disable mailbox interrupts and either be configured for firmware-first event handling or skip event processing entirely, and still function. Dave Larsen reported that the following Intel / Agilex card does not support interrupts on function 0. CXL: Intel Corporation Device 0ddb (rev 01) (prog-if 10 [CXL Memory Device (CXL 2.x)]) Rather than fail device probe if interrupts are not supported, flag that irqs are not enabled and avoid features which require interrupts. Emit messages appropriate for the situation to aid in debugging should device behavior be unexpected due to a failure to allocate vectors. Note that it is possible for a device to support host-based event processing through polling; however, the driver does not support polling, and it is not anticipated to be generally required. Leave that functionality to a future patch if such a device comes along. Reported-by: Dave Larsen <davelarsen58@gmail.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Fan Ni <fan.ni@samsung.com> Signed-off-by: Ira Weiny <ira.weiny@intel.com> Reviewed-and-tested-by: Davidlohr Bueso <dave@stgolabs.net> Link: https://lore.kernel.org/r/20240117-dont-fail-irq-v2-1-f33f26b0e365@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
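A minimal sketch of the probe-time pattern described above; the wrapper name and debug message are illustrative, not necessarily the exact mainline identifiers:

    static bool cxl_alloc_irq_vectors(struct pci_dev *pdev)
    {
            /*
             * 16 vectors covers the message numbers currently used by
             * CXL features (mailbox, event logs); the device may grant
             * fewer, or none at all.
             */
            int rc = pci_alloc_irq_vectors(pdev, 1, 16,
                                           PCI_IRQ_MSIX | PCI_IRQ_MSI);

            if (rc < 0) {
                    dev_dbg(&pdev->dev,
                            "Interrupts unavailable (%d); irq features disabled\n",
                            rc);
                    return false;
            }
            return true;
    }

    /*
     * In cxl_pci_probe() the result is recorded rather than treated as
     * fatal, e.g.: irq_avail = cxl_alloc_irq_vectors(pdev);
     */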
|
#
dc97f634 |
|
20-Dec-2023 |
Ira Weiny <ira.weiny@intel.com> |
cxl/pci: Register for and process CPER events If the firmware has configured CXL event support to be firmware first the OS can process those events through CPER records. The CXL layer has unique DPA to HPA knowledge and standard event trace parsing in place. CPER records contain Bus, Device, Function information which can be used to identify the PCI device which is sending the event. Change the PCI driver registration to include registration of a CXL CPER callback to process events through the trace subsystem. Use the new scope-based management to simplify the handling of the PCI device object. Tested-by: Smita-Koralahalli <Smita.KoralahalliChannabasappa@amd.com> Reviewed-by: Smita-Koralahalli <Smita.KoralahalliChannabasappa@amd.com> Link: https://lore.kernel.org/r/20231220-cxl-cper-v5-9-1bb8a4ca2c7a@intel.com Signed-off-by: Ira Weiny <ira.weiny@intel.com> [djbw: use new pci_dev guard, flip init order] Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
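A sketch of the scope-based pci_dev handling the djbw note refers to; the callback shape and its arguments are hypothetical, but __free(pci_dev_put) is the cleanup.h annotation this pattern relies on:

    #include <linux/cleanup.h>
    #include <linux/pci.h>

    /* Hypothetical: resolve a CPER record's segment/bus/devfn to a pci_dev */
    static void cxl_cper_event_call(int segment, int bus, int devfn)
    {
            struct pci_dev *pdev __free(pci_dev_put) =
                    pci_get_domain_bus_and_slot(segment, bus, devfn);

            if (!pdev)
                    return;

            /* ... verify cxl_pci is bound, then emit the trace event ... */
    }       /* reference dropped automatically as pdev goes out of scope */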
|
#
e8db0701 |
|
18-Oct-2023 |
Robert Richter <rrichter@amd.com> |
cxl/core/regs: Rework cxl_map_pmu_regs() to use map->dev for devm struct cxl_register_map carries a @dev parameter for devm operations. Simplify the function interface to use that instead of a separate @dev argument. Signed-off-by: Terry Bowman <terry.bowman@amd.com> Signed-off-by: Robert Richter <rrichter@amd.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20231018171713.1883517-21-rrichter@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
f611d98a |
|
18-Oct-2023 |
Robert Richter <rrichter@amd.com> |
cxl/pci: Remove Component Register base address from struct cxl_dev_state The Component Register base address @component_reg_phys is no longer used after the rework of the Component Register setup which now uses struct member @reg_map instead. Remove the base address. Signed-off-by: Terry Bowman <terry.bowman@amd.com> Signed-off-by: Robert Richter <rrichter@amd.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20231018171713.1883517-9-rrichter@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
2dd18279 |
|
18-Oct-2023 |
Robert Richter <rrichter@amd.com> |
cxl/pci: Store the endpoint's Component Register mappings in struct cxl_dev_state Same as for ports and dports, also store the endpoint's Component Register mappings, using struct cxl_dev_state for that. Keep the Component Register base address @component_reg_phys for now so as not to break functionality. It will be removed after the transition in a later patch. Signed-off-by: Terry Bowman <terry.bowman@amd.com> Signed-off-by: Robert Richter <rrichter@amd.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20231018171713.1883517-7-rrichter@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
dd22581f |
|
18-Oct-2023 |
Robert Richter <rrichter@amd.com> |
cxl/core/regs: Rename @dev to @host in struct cxl_register_map The primary role of @dev is to host the mappings for devm operations. @dev is too ambiguous as a name. I.e. when does @dev refer to the 'struct device *' instance that the registers belong to, and when does @dev refer to the 'struct device *' instance hosting the mapping for devm operations? Clarify the role of @dev in cxl_register_map by renaming it to @host. Also, rename local variables to 'host' where map->host is used. Signed-off-by: Terry Bowman <terry.bowman@amd.com> Signed-off-by: Robert Richter <rrichter@amd.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20231018171713.1883517-3-rrichter@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
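An abridged sketch of the structure after the rename; the field set is approximate, not the full mainline definition:

    struct cxl_register_map {
            struct device *host;    /* was @dev: devm context for the mapping */
            void __iomem *base;
            resource_size_t resource;
            resource_size_t max_size;
            u8 reg_type;
            /* ... per-register-block details elided ... */
    };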
|
#
33981838 |
|
04-Oct-2023 |
Dan Williams <dan.j.williams@intel.com> |
cxl/memdev: Fix sanitize vs decoder setup locking The sanitize operation is destructive and the expectation is that the device is unmapped while in progress. The current implementation does a lockless check for decoders being active, but then does nothing to prevent decoders from racing to be committed. Introduce state tracking to resolve this race. This incidentally also stops unprivileged userspace from triggering MMIO read cycles by spinning on reads of the 'security/state' attribute, which at a minimum is a waste since the kernel state machine can cache the completion result. Lastly, cxl_mem_sanitize() was mistakenly marked EXPORT_SYMBOL() in the original implementation, but an export was never required. Fixes: 0c36b6ad436a ("cxl/mbox: Add sanitization handling machinery") Cc: Davidlohr Bueso <dave@stgolabs.net> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Davidlohr Bueso <dave@stgolabs.net> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
5f2da197 |
|
04-Oct-2023 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Fix sanitize notifier setup Fix a race condition between the mailbox-background command interrupt firing and the security-state sysfs attribute being removed. The race is difficult to see due to the awkward placement of the sanitize-notifier setup code and the multiple places the teardown calls are made, cxl_memdev_security_init() and cxl_memdev_security_shutdown(). Unify setup in one place, cxl_sanitize_setup_notifier(). Arrange for the paired cxl_sanitize_teardown_notifier() to safely quiet the notifier and let the cxl_memdev + irq be unregistered later in the flow. Note: The special wrinkle of the sanitize notifier is that it interacts with interrupts, which are enabled early in the flow, and it interacts with memdev sysfs which is not initialized until late in the flow. Hence why this setup routine takes an @cxlmd argument, and not just @mds. This fix is also needed as a preparation fix for a memdev unregistration crash. Reported-by: Jonathan Cameron <Jonathan.Cameron@Huawei.com> Closes: http://lore.kernel.org/r/20230929100316.00004546@Huawei.com Cc: Dave Jiang <dave.jiang@intel.com> Cc: Davidlohr Bueso <dave@stgolabs.net> Fixes: 0c36b6ad436a ("cxl/mbox: Add sanitization handling machinery") Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Davidlohr Bueso <dave@stgolabs.net> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
f29a824b |
|
04-Oct-2023 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Clarify devm host for memdev relative setup It is all too easy to get confused about @dev usage in the CXL driver stack. Before adding a new cxl_pci_probe() setup operation that has a devm lifetime dependent on @cxlds->dev binding, but also references @cxlmd->dev, and prints messages, rework the devm_cxl_add_memdev() and cxl_memdev_setup_fw_upload() function signatures to make this distinction explicit. I.e. pass in the devm context as a @host argument rather than infer it from other objects. This is in preparation for adding a devm_cxl_sanitize_setup_notifier(). Note the whitespace fixup near the change of the devm_cxl_add_memdev() signature. That uncaught typo originated in the patch that added cxl_memdev_security_init(). Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
08b8a8c05 |
|
03-Oct-2023 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Remove hardirq handler for cxl_request_irq() Now that all callers of cxl_request_irq() are using threaded irqs, drop the hardirq handler option. Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Davidlohr Bueso <dave@stgolabs.net> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
e30a1065 |
|
29-Sep-2023 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Cleanup 'sanitize' to always poll In preparation for fixing the init/teardown of the 'sanitize' workqueue and sysfs notification mechanism, arrange for cxl_mbox_sanitize_work() to be the single location where the sysfs attribute is notified. With that change there is no distinction between polled mode and interrupt mode. All the interrupt does is accelerate the polling interval. The change to check for "mds->security.sanitize_node" under the lock is there to ensure that the interrupt, the work routine and the setup/teardown code can all have a consistent view of the registered notifier and the workqueue state. I.e. the expectation is that the interrupt is live past the point that the sanitize sysfs attribute is published, and it may race teardown, so it must be consulted under a lock. Given that new locking requirement, cxl_pci_mbox_irq() is moved from hard to thread irq context. Lastly, some opportunistic replacements of "queue_delayed_work(system_wq, ...)", which is just open-coded schedule_delayed_work(), are included. Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Davidlohr Bueso <dave@stgolabs.net> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
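The shape of the consolidated poller, paraphrased from the driver (details approximate); note the sanitize_node check and the re-arm both happen under mbox_mutex:

    static void cxl_mbox_sanitize_work(struct work_struct *work)
    {
            struct cxl_memdev_state *mds =
                    container_of(work, typeof(*mds), security.poll_dwork.work);

            mutex_lock(&mds->mbox_mutex);
            if (cxl_mbox_background_complete(&mds->cxlds)) {
                    mds->security.poll_tmo_secs = 0;
                    if (mds->security.sanitize_node)
                            sysfs_notify_dirent(mds->security.sanitize_node);
            } else {
                    /* an interrupt merely shortens the next poll interval */
                    int timeout = mds->security.poll_tmo_secs + 10;

                    mds->security.poll_tmo_secs = min(15 * 60, timeout);
                    schedule_delayed_work(&mds->security.poll_dwork,
                                          timeout * HZ);
            }
            mutex_unlock(&mds->mbox_mutex);
    }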
|
#
76fe8713 |
|
29-Sep-2023 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Remove unnecessary device reference management in sanitize work Given that any particular put_device() could be the final put of the device, the fact that there are usages of cxlds->dev after put_device(cxlds->dev) is a red flag. Drop the reference counting since the device is pinned by being registered and will not be unregistered without triggering the driver + workqueue to shutdown. Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Davidlohr Bueso <dave@stgolabs.net> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
1b27978d |
|
17-May-2023 |
Ira Weiny <ira.weiny@intel.com> |
cxl/pci: Update comment The existence of struct cxl_dev_id containing a single member is odd. The comment made sense when I wrote it but could be clarified. Update the comment and place it next to the odd looking structure. Signed-off-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20230426-cxl-fixes-v1-2-870c4c8b463a@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
55b8ff06 |
|
23-Aug-2023 |
Smita Koralahalli <Smita.KoralahalliChannabasappa@amd.com> |
cxl/pci: Replace host_bridge->native_aer with pcie_aer_is_native() Use pcie_aer_is_native() to determine the native AER ownership as the usage of host_bridge->native_aer does not cover command line override of AER ownership. Signed-off-by: Smita Koralahalli <Smita.KoralahalliChannabasappa@amd.com> Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com> Reviewed-by: Robert Richter <rrichter@amd.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20230823234305.27333-4-Smita.KoralahalliChannabasappa@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
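In code terms the change amounts to the following, where the unmask call stands in for the driver's RAS mask handling:

    /* Before: host bridge flag, blind to command line AER overrides */
    if (pci_find_host_bridge(pdev->bus)->native_aer)
            cxl_pci_ras_unmask(pdev);

    /* After: ask the AER core, which accounts for the override */
    if (pcie_aer_is_native(pdev))
            cxl_pci_ras_unmask(pdev);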
|
#
0339dc39 |
|
23-Aug-2023 |
Smita Koralahalli <Smita.KoralahalliChannabasappa@amd.com> |
cxl/pci: Fix appropriate checking for _OSC while handling CXL RAS registers cxl_pci fails to unmask CXL protocol errors when CXL memory error reporting is not granted native control. Given that CXL memory error reporting uses the event interface and protocol errors use AER, unmask protocol errors based only on the native AER setting. Without this change end user deployments will fail to report protocol errors in the case where native memory error handling is not granted to Linux. Also, return zero instead of an error code to not block the communication with the cxl device when in native memory error reporting mode. Fixes: 248529edc86f ("cxl: add RAS status unmasking for CXL") Cc: <stable@vger.kernel.org> Signed-off-by: Smita Koralahalli <Smita.KoralahalliChannabasappa@amd.com> Reviewed-by: Robert Richter <rrichter@amd.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20230823234305.27333-2-Smita.KoralahalliChannabasappa@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
71baec7b |
|
27-Jun-2023 |
Davidlohr Bueso <dave@stgolabs.net> |
cxl/pci: Use correct flag for sanitize polling The flag used for sanitize polling is a bogus value, left behind from a previous version of the patch. Fixes: 0c36b6ad436a ("cxl/mbox: Add sanitization handling machinery") Signed-off-by: Davidlohr Bueso <dave@stgolabs.net> Link: https://lore.kernel.org/r/7q3vcjqidtmxmys4n34g6b3mygvhaen7yikzxanpz56lw43fz7@7subbtbfkmyx Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
9521875b |
|
14-Jun-2023 |
Vishal Verma <vishal.l.verma@intel.com> |
cxl: add a firmware update mechanism using the sysfs firmware loader The sysfs based firmware loader mechanism was created to easily allow userspace to upload firmware images to FPGA cards. This also happens to be pretty suitable to create a user-initiated but kernel-controlled firmware update mechanism for CXL devices, using the CXL specified mailbox commands. Since firmware update commands can be long-running, and can be processed in the background by the endpoint device, it is desirable to have the ability to chunk the firmware transfer down to smaller pieces, so that one operation does not monopolize the mailbox, locking out any other long running background commands entirely - e.g. security commands like 'sanitize' or poison scanning operations. The firmware loader mechanism allows a natural way to perform this chunking, as after each mailbox command, that is restricted to the maximum mailbox payload size, the cxl memdev driver relinquishes control back to the fw_loader system and awaits the next chunk of data to transfer. This opens opportunities for other background commands to access the mailbox and send their own slices of background commands. Add the necessary helpers and state tracking to be able to perform the 'Get FW Info', 'Transfer FW', and 'Activate FW' mailbox commands as described in the CXL spec. Wire these up to the firmware loader callbacks, and register with that system to create the memX/firmware/ sysfs ABI. Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Jonathan Cameron <Jonathan.Cameron@Huawei.com> Cc: Russ Weight <russell.h.weight@intel.com> Cc: Alison Schofield <alison.schofield@intel.com> Cc: Ira Weiny <ira.weiny@intel.com> Cc: Dave Jiang <dave.jiang@intel.com> Cc: Ben Widawsky <bwidawsk@kernel.org> Cc: Dan Williams <dan.j.williams@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Vishal Verma <vishal.l.verma@intel.com> Link: https://lore.kernel.org/r/20230602-vv-fw_update-v4-1-c6265bd7343b@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
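A sketch of the chunked-transfer idea; the mainline implementation is driven by fw_upload callbacks rather than a loop, and cxl_transfer_fw_chunk() is a hypothetical helper for illustration:

    static int cxl_fw_write(struct cxl_memdev_state *mds, const u8 *data,
                            u32 offset, u32 size)
    {
            /* each Transfer FW command is capped by the mailbox payload */
            size_t max_chunk = mds->payload_size -
                               sizeof(struct cxl_mbox_transfer_fw);

            while (size) {
                    size_t len = min_t(size_t, size, max_chunk);
                    int rc;

                    /*
                     * One mailbox command per chunk; other background
                     * commands can interleave between iterations.
                     */
                    rc = cxl_transfer_fw_chunk(mds, data, offset, len);
                    if (rc)
                            return rc;
                    data += len;
                    offset += len;
                    size -= len;
            }
            return 0;
    }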
|
#
48dcdbb1 |
|
12-Jun-2023 |
Davidlohr Bueso <dave@stgolabs.net> |
cxl/mem: Wire up Sanitization support Implement support for CXL 3.0 8.2.9.8.5.1 Sanitize. This is done by adding a 'security/sanitize' memdev sysfs file to trigger the operation and extending the status file to make it poll(2)-capable for completion. Unlike all other background commands, this is the only operation that is special and monopolizes the device for long periods of time. In addition to the traditional pmem security requirements, all regions must also be offline in order to perform the operation. This permits avoiding explicit global CPU cache management, relying instead on the implicit cache management when a region transitions between CXL_CONFIG_ACTIVE and CXL_CONFIG_COMMIT. The expectation is that userspace can use it as follows:

    cxl disable-memdev memX
    echo 1 > /sys/bus/cxl/devices/memX/security/sanitize
    cxl wait-sanitize memX
    cxl enable-memdev memX

Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Davidlohr Bueso <dave@stgolabs.net> Link: https://lore.kernel.org/r/20230612181038.14421-5-dave@stgolabs.net Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
0c36b6ad |
|
12-Jun-2023 |
Davidlohr Bueso <dave@stgolabs.net> |
cxl/mbox: Add sanitization handling machinery Sanitization is by definition a device-monopolizing operation, and thus the timeslicing rules for other background commands do not apply. As such handle this special case asynchronously and return immediately. Subsequent changes will allow completion to be pollable from userspace via a sysfs file interface. For devices that don't support interrupts for notifying background command completion, self-poll with the caveat that the poller can be out of sync with the ready hardware, and therefore care must be taken to not allow any new commands to go through until the poller sees the hw completion. The poller takes the mbox_mutex to stabilize the flagging, minimizing any runtime overhead in the send path to check for 'sanitize_tmo' for uncommon poll scenarios. The irq case is much simpler as hardware will serialize/error appropriately. Reviewed-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Davidlohr Bueso <dave@stgolabs.net> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20230612181038.14421-4-dave@stgolabs.net Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
8ea9c33d |
|
12-Jun-2023 |
Davidlohr Bueso <dave@stgolabs.net> |
cxl/mbox: Allow for IRQ_NONE case in the isr For cases when the mailbox background operation is not complete, do not "handle" the interrupt, as it was not from this device. And furthermore there are no racy scenarios such as the hw being out of sync with the driver and starting a new background op behind its back. Reported-by: Jonathan Cameron <Jonathan.Cameron@Huawei.com> Fixes: ccadf1310fb ("cxl/mbox: Add background cmd handling machinery") Signed-off-by: Davidlohr Bueso <dave@stgolabs.net> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20230612181038.14421-2-dave@stgolabs.net Signed-off-by: Dan Williams <dan.j.williams@intel.com>
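A minimal sketch of the shared-irq discipline described here; handler internals are approximate:

    static irqreturn_t cxl_pci_mbox_irq(int irq, void *id)
    {
            struct cxl_dev_id *dev_id = id;
            struct cxl_dev_state *cxlds = dev_id->cxlds;

            /* shared line: only claim interrupts that are actually ours */
            if (!cxl_mbox_background_complete(cxlds))
                    return IRQ_NONE;

            /* ... wake the task blocked on the background command ... */
            return IRQ_HANDLED;
    }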
|
#
f3c8a37a |
|
14-Jun-2023 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Unconditionally unmask 256B Flit errors The current check for 256B Flit mode is incomplete and unnecessary. It is incomplete because it fails to consider the link speed, or check for CXL link capabilities. It is unnecessary because unconditionally unmasking 256B Flit errors is a nop when 256B Flit operation is not available. Remove this check in preparation for creating a cxl_probe_link() helper to centralize this detection. Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/168679263124.3436160.6228910132469454346.stgit@dwillia2-xfh.jf.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
59f8d151 |
|
14-Jun-2023 |
Dan Williams <dan.j.williams@intel.com> |
cxl/mbox: Move mailbox related driver state to its own data structure 'struct cxl_dev_state' makes too many assumptions about the capabilities of a CXL device. In particular it assumes a CXL device has a mailbox and all of the infrastructure and state that comes along with that. In preparation for supporting accelerator / Type-2 devices that may not have a mailbox and in general maintain a minimal core context structure, make mailbox functionality a super-set of 'struct cxl_dev_state' with 'struct cxl_memdev_state'. With this reorganization it allows for CXL devices that support HDM decoder mapping, but not other general-expander / Type-3 capabilities, to only enable that subset without the rest of the mailbox infrastructure coming along for the ride. Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/168679260240.3436160.15520641540463704524.stgit@dwillia2-xfh.jf.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
733b57f2 |
|
22-Jun-2023 |
Robert Richter <rrichter@amd.com> |
cxl/pci: Early setup RCH dport component registers from RCRB CXL RAS capabilities must be enabled and accessible as soon as the CXL endpoint is detected in the PCI hierarchy and bound to the cxl_pci driver. This needs to be independent of other modules such as cxl_port or cxl_mem. CXL RAS capabilities reside in the Component Registers. For an RCH these are determined by probing the RCRB, which is currently done very late, once the CXL Memory Device is created. Change this by moving the RCRB probe to the cxl_pci driver. Do this by using a newly introduced function cxl_pci_find_port(), similar to cxl_mem_find_port(), to determine the involved dport by the endpoint's PCI handle. Plug this into the existing cxl_pci_setup_regs() function to set up the Component Registers. Probe the RCRB in case the Component Registers cannot be located through the CXL Register Locator capability. This unifies the code and sets up the Component Registers early for both VH and RCH modes. Only the cxl_pci driver is involved in this. This allows an early mapping of the CXL RAS capability registers. Signed-off-by: Robert Richter <rrichter@amd.com> Signed-off-by: Terry Bowman <terry.bowman@amd.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20230622205523.85375-14-terry.bowman@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
f1d0525e |
|
22-Jun-2023 |
Robert Richter <rrichter@amd.com> |
cxl/regs: Remove early capability checks in Component Register setup When probing the Component Registers in function cxl_probe_regs() there are also checks for the existence of the HDM and RAS capabilities. The checks may fail for components that do not implement the HDM capability, causing the Component Registers setup to fail too. Remove the checks to generalize the use of cxl_probe_regs() and instead check for the capabilities directly before mapping the RAS or HDM capabilities. This allows setting up other Component Registers, especially those of an RCH Downstream Port, which will be implemented in a follow-on patch. Signed-off-by: Robert Richter <rrichter@amd.com> Signed-off-by: Terry Bowman <terry.bowman@amd.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20230622205523.85375-12-terry.bowman@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
d076bb8c |
|
22-Jun-2023 |
Terry Bowman <terry.bowman@amd.com> |
cxl/pci: Refactor component register discovery for reuse The endpoint implements component register setup code. Refactor it for reuse with RCRB, downstream port, and upstream port setup. Move PCI specifics from cxl_setup_regs() into cxl_pci_setup_regs(). Move cxl_setup_regs() into cxl/core/regs.c and export it. This also includes the supporting static functions cxl_map_register_block(), cxl_unmap_register_block() and cxl_probe_regs(). Co-developed-by: Robert Richter <rrichter@amd.com> Signed-off-by: Robert Richter <rrichter@amd.com> Signed-off-by: Terry Bowman <terry.bowman@amd.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20230622205523.85375-8-terry.bowman@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
57340804 |
|
22-Jun-2023 |
Robert Richter <rrichter@amd.com> |
cxl/core/regs: Add @dev to cxl_register_map The corresponding device of a register mapping is used for devm operations and logging. For operations with struct cxl_register_map the device needs to be tracked separately. To simplify the involved function interfaces, add @dev to cxl_register_map. While at it also reorder the function arguments of cxl_map_device_regs() and cxl_map_component_regs() to have the @cxl_register_map object first. As a result a number of functions become available to be used with a @cxl_register_map object. This patch is in preparation for reworking the component register setup code. Signed-off-by: Robert Richter <rrichter@amd.com> Signed-off-by: Terry Bowman <terry.bowman@amd.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20230622205523.85375-7-terry.bowman@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
1ad3f701 |
|
26-May-2023 |
Jonathan Cameron <Jonathan.Cameron@huawei.com> |
cxl/pci: Find and register CXL PMU devices CXL PMU devices can be found from entries in the Register Locator DVSEC. Reviewed-by: Dan Williams <dan.j.williams@intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20230526095824.16336-4-Jonathan.Cameron@huawei.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
ccadf131 |
|
23-May-2023 |
Davidlohr Bueso <dave@stgolabs.net> |
cxl/mbox: Add background cmd handling machinery This adds support for handling background operations, as defined in the CXL 3.0 spec. Commands that can take too long (over ~2 seconds) can run in the background asynchronously (to the hardware). The driver will deal with such commands synchronously, blocking all other incoming commands for a specified period of time, allowing time-slicing of the command such that the caller can send incremental requests to avoid monopolizing the driver/device. Any out-of-sync condition (timeout) between the driver and hardware is simply disregarded as an invalid state until the next successful submission. Such timeouts are considered a rare occurrence, either a real device problem or a driver issue that needs to reduce the size of the background operation to fit the timeout. On devices where mbox interrupts are supported, this will still use a poller that will wake up at the specified wait intervals. The irq handler will simply awaken the blocked cmd, which is also safe vs a task that is either waking (timing out) or already awoken. Similarly, any irq setup error during probing falls back to polling, thus avoiding unnecessarily erroring out. Signed-off-by: Davidlohr Bueso <dave@stgolabs.net> Link: https://lore.kernel.org/r/20230523170927.20685-5-dave@stgolabs.net Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
9f7a320d |
|
23-May-2023 |
Davidlohr Bueso <dave@stgolabs.net> |
cxl/pci: Introduce cxl_request_irq() Factor out common functionality/semantics for cxl shared interrupts into a new helper on top of devm_request_irq(). Suggested-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Davidlohr Bueso <dave@stgolabs.net> Link: https://lore.kernel.org/r/20230523170927.20685-4-dave@stgolabs.net Signed-off-by: Dan Williams <dan.j.williams@intel.com>
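The helper's shape, approximately (a later patch in this log drops the hardirq @handler argument once all users are threaded):

    static int cxl_request_irq(struct cxl_dev_state *cxlds, int irq,
                               irq_handler_t handler, irq_handler_t thread_fn)
    {
            struct device *dev = cxlds->dev;
            struct cxl_dev_id *dev_id;

            /* dev_id must be globally unique and survive until unbind */
            dev_id = devm_kzalloc(dev, sizeof(*dev_id), GFP_KERNEL);
            if (!dev_id)
                    return -ENOMEM;
            dev_id->cxlds = cxlds;

            return devm_request_threaded_irq(dev, irq, handler, thread_fn,
                                             IRQF_SHARED | IRQF_ONESHOT,
                                             NULL, dev_id);
    }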
|
#
f279d0bc |
|
23-May-2023 |
Davidlohr Bueso <dave@stgolabs.net> |
cxl/pci: Allocate irq vectors earlier during probe Move the cxl_alloc_irq_vectors() call further up in the probing in order to allow for mailbox interrupt usage. No change in semantics. Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Davidlohr Bueso <dave@stgolabs.net> Link: https://lore.kernel.org/r/20230523170927.20685-3-dave@stgolabs.net Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
e764f122 |
|
18-May-2023 |
Dave Jiang <dave.jiang@intel.com> |
cxl: Move cxl_await_media_ready() to before capacity info retrieval Move cxl_await_media_ready() to cxl_pci probe before driver starts issuing IDENTIFY and retrieving memory device information to ensure that the device is ready to provide the information. Allow cxl_pci_probe() to succeed even if media is not ready. Cache the media failure in cxlds and don't ask the device for any media information. The rationale for proceeding in the !media_ready case is to allow for mailbox operations to interrogate and/or remediate the device. After media is repaired then rebinding the cxl_pci driver is expected to restart the capacity scan. Suggested-by: Dan Williams <dan.j.williams@intel.com> Fixes: b39cb1052a5c ("cxl/mem: Register CXL memX devices") Reviewed-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/168445310026.3251520.8124296540679268206.stgit@djiang5-mobl3 [djbw: fixup cxl_test] Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
d0abf578 |
|
18-Apr-2023 |
Alison Schofield <alison.schofield@intel.com> |
cxl/mbox: Initialize the poison state Driver reads of the poison list are synchronized to ensure that a reader does not get an incomplete list because their request overlapped (was interrupted or preceded by) another read request of the same DPA range. (CXL Spec 3.0 Section 8.2.9.8.4.1). The driver maintains state information to achieve this goal. To initialize the state, first recognize the poison commands in the CEL (Command Effects Log). If the device supports Get Poison List, allocate a single buffer for the poison list and protect it with a lock. Signed-off-by: Alison Schofield <alison.schofield@intel.com> Link: https://lore.kernel.org/r/9078d180769be28a5087288b38cdfc827cae58bf.1681838291.git.alison.schofield@intel.com Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Tested-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
af0a6c35 |
|
11-Mar-2023 |
Lukas Wunner <lukas@wunner.de> |
cxl/pci: Use CDAT DOE mailbox created by PCI core The PCI core has just been amended to create a pci_doe_mb struct for every DOE instance on device enumeration. Drop creation of a (duplicate) CDAT DOE mailbox on cxl probing in favor of the one already created by the PCI core. Tested-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Lukas Wunner <lukas@wunner.de> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/becaf70e8faf9681d474200117d62d7eaac46cca.1678543498.git.lukas@wunner.de Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
248529ed |
|
14-Feb-2023 |
Dave Jiang <dave.jiang@intel.com> |
cxl: add RAS status unmasking for CXL By default the CXL RAS mask register bits are set to 1 and suppress all error reporting. If the kernel has negotiated ownership of error handling for CXL then unmask the mask registers by writing 0s. The PCI_EXP_DEVCTL register is checked to see whether the uncorrectable or correctable error reporting bits are set before unmasking the respective errors. Acked-by: Bjorn Helgaas <bhelgaas@google.com> # pci_regs.h Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/167639402301.778884.12556849214955646539.stgit@djiang5-mobl3.local Signed-off-by: Dan Williams <dan.j.williams@intel.com>
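A condensed sketch of the unmask logic; register offsets follow the driver's naming, and error handling of the config read is elided:

    static void cxl_pci_ras_unmask(struct pci_dev *pdev,
                                   void __iomem *ras_base)
    {
            u16 devctl;

            pcie_capability_read_word(pdev, PCI_EXP_DEVCTL, &devctl);

            /* mask bits default to 1 == suppressed; write 0s to unmask */
            if (devctl & PCI_EXP_DEVCTL_URRE)
                    writel(0, ras_base + CXL_RAS_UNCORRECTABLE_MASK_OFFSET);
            if (devctl & PCI_EXP_DEVCTL_CERE)
                    writel(0, ras_base + CXL_RAS_CORRECTABLE_MASK_OFFSET);
    }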
|
#
1922a6dc |
|
13-Feb-2023 |
Dave Jiang <dave.jiang@intel.com> |
cxl: remove unnecessary calling of pci_enable_pcie_error_reporting() With this [1] commit upstream, pci_enable_pcie_error_reporting() is no longer necessary for the driver to call. Remove the call and related cleanups. [1]: f26e58bf6f54 ("PCI/AER: Enable error reporting when AER is native") Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/167632012093.4153151.5360778069735064322.stgit@djiang5-mobl3.local Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
5a84711f |
|
30-Jan-2023 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Fix irq oneshot expectations The IRQ core expects that users of the default hardirq handler specify IRQF_ONESHOT to keep interrupts disabled until the threaded handler runs. That meets the CXL driver's expectations since it is an edge triggered MSI and this flag would have been passed by default using pci_request_irq() instead of devm_request_threaded_irq(). Fixes: a49aa8141b65 ("cxl/mem: Wire up event interrupts") Reported-by: kernel test robot <lkp@intel.com> Reported-by: Julia Lawall <julia.lawall@lip6.fr> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
fa884345 |
|
30-Jan-2023 |
Jonathan Cameron <Jonathan.Cameron@huawei.com> |
cxl/pci: Set the device timestamp CXL r3.0 section 8.2.9.4.2 "Set Timestamp" recommends that the host sets the timestamp after every Conventional or CXL Reset to ensure accurate timestamps. This should include on initial boot up. The time base that is being set is used by a device for the poison list overflow timestamp and all event timestamps. Note that the command is optional and if not supported and the device cannot return accurate timestamps it will fill the fields in with an appropriate marker (see the specification description of each timestamp). Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20230130151327.32415-1-Jonathan.Cameron@huawei.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
a49aa814 |
|
17-Jan-2023 |
Davidlohr Bueso <dave@stgolabs.net> |
cxl/mem: Wire up event interrupts Currently the only CXL features targeted for irq support require their message numbers to be within the first 16 entries. The device may however support fewer than 16 entries depending on the support it provides. Attempt to allocate these 16 irq vectors. If the device supports fewer, the PCI infrastructure will allocate that number. Upon successful allocation, users can plug in their respective isr at any point thereafter. CXL device events are signaled via interrupts. Each event log may have a different interrupt message number. These message numbers are reported in the Get Event Interrupt Policy mailbox command. Add interrupt support for event logs. Interrupts are allocated as shared interrupts. Therefore, all or some event logs can share the same message number. In addition all logs are queried on any interrupt in order of the most to least severe based on the status register. Finally place all event configuration logic into cxl_event_config(). Previously the logic was a simple 'read all' on start up. But interrupts must be configured prior to any reads to ensure no events are missed. A single event configuration function results in a cleaner overall implementation. Cc: Bjorn Helgaas <helgaas@kernel.org> Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com> Co-developed-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Davidlohr Bueso <dave@stgolabs.net> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Ira Weiny <ira.weiny@intel.com> Link: https://lore.kernel.org/r/20221216-cxl-ev-log-v7-2-2316a5c8f7d8@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
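The drain loop in the threaded handler looks roughly like this; status constants follow the driver's naming, exact internals are approximate:

    static irqreturn_t cxl_event_thread(int irq, void *id)
    {
            struct cxl_dev_id *dev_id = id;
            struct cxl_dev_state *cxlds = dev_id->cxlds;
            u32 status;

            for (;;) {
                    status = readl(cxlds->regs.status +
                                   CXLDEV_DEV_EVENT_STATUS_OFFSET);
                    status &= CXLDEV_EVENT_STATUS_ALL;
                    if (!status)
                            break;
                    /* drains logs from most- to least-severe */
                    cxl_mem_get_event_records(cxlds, status);
                    cond_resched();
            }
            return IRQ_HANDLED;
    }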
|
#
6ebe28f9 |
|
17-Jan-2023 |
Ira Weiny <ira.weiny@intel.com> |
cxl/mem: Read, trace, and clear events on driver load CXL devices have multiple event logs which can be queried for CXL event records. Devices are required to support the storage of at least one event record in each event log type. Devices track event log overflow by incrementing a counter and tracking the time of the first and last overflow event seen. Software queries events via the Get Event Records mailbox command (CXL rev 3.0 section 8.2.9.2.2) and clears events via the Clear Event Records mailbox command (CXL rev 3.0 section 8.2.9.2.3). If the result of negotiating CXL Error Reporting Control is OS control, read and clear all event logs on driver load. Ensure a clean slate of events by reading and clearing the events on driver load. The status register is not used because a device may continue to trigger events and the only requirement is to empty the log at least once. This allows for the required transition from empty to non-empty for interrupt generation. Handling of interrupts is in a follow-on patch. The device can return up to 1MB worth of event records per query. Allocate a shared large buffer to handle the max number of records based on the mailbox payload size. This patch traces a raw event record and leaves specific event record type tracing to subsequent patches. Macros are created to aid in tracing the common CXL Event header fields. Each record is cleared explicitly. A clear-all bit is specified but is only valid when the log overflows. Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Ira Weiny <ira.weiny@intel.com> Link: https://lore.kernel.org/r/20221216-cxl-ev-log-v7-1-2316a5c8f7d8@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
852db33c |
|
03-Jan-2023 |
Robert Richter <rrichter@amd.com> |
cxl/pci: Show opcode in debug messages when sending a command For debugging it is very helpful to see which commands are sent. Add it to the debug message. Signed-off-by: Robert Richter <rrichter@amd.com> Link: https://lore.kernel.org/r/20230103210151.1126873-1-rrichter@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
4a20bc3e |
|
08-Dec-2022 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Move tracepoint definitions to drivers/cxl/core/ CXL is using tracepoints for reporting RAS capability register payloads for AER events, and has plans to use tracepoints for the output payload of Get Poison List and Get Event Records commands. For organization purposes it would be nice to keep those all under a single + local CXL trace system. This organization also potentially helps in the future when CXL drivers expand beyond generic memory expanders, however that would also entail a move away from the expander-specific cxl_dev_state context, save that for later. Note that the powerpc-specific drivers/misc/cxl/ also defines a 'cxl' trace system, however, it is unlikely that a single platform will ever load both drivers simultaneously. Cc: Steven Rostedt <rostedt@goodmis.org> Tested-by: Alison Schofield <alison.schofield@intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/167051869176.436579.9728373544811641087.stgit@dwillia2-xfh.jf.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
2ec1b17f |
|
06-Jan-2023 |
Dave Jiang <dave.jiang@intel.com> |
cxl: fix cxl_report_and_clear() RAS UE addr mis-assignment The 'addr' variable that contains the RAS UE register address is re-assigned to the RAS_CAP_CONTROL offset if there are multiple UE errors. Use a different addr variable to avoid the reassignment mistake. Fixes: 2905cb5236cb ("cxl/pci: Add (hopeful) error handling support") Reported-by: Jonathan Cameron <jonathan.cameron@huawei.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Link: https://lore.kernel.org/r/167302318779.580155.15233596744650706167.stgit@djiang5-mobl3.local Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
9b5f77ef |
|
05-Dec-2022 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Remove endian confusion readl() already handles endian conversion. That's the main difference between readl() and __raw_readl(). This is benign on little-endian systems, but big endian systems will end up byte-swabbing twice. Fixes: 2905cb5236cb ("cxl/pci: Add (hopeful) error handling support") Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com> Cc: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Link: https://lore.kernel.org/r/167030092025.4045167.10651070153523351093.stgit@dwillia2-xfh.jf.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
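Concretely, the bug class looks like this; readl() already performs the little-endian-to-CPU conversion, so layering another conversion on top double-swaps on big-endian hosts (addr is an ioremapped register here):

    u32 status;

    /* wrong: readl() already converted, so this swaps twice on BE */
    status = le32_to_cpu((__force __le32)readl(addr));

    /* right: readl() handles the conversion by itself */
    status = readl(addr);

    /* equivalent to readl(); only needed in special raw-access cases */
    status = le32_to_cpu((__force __le32)__raw_readl(addr));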
|
#
372ab3bc |
|
05-Dec-2022 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Add some type-safety to the AER trace points The first argument to the CXL AER trace points is the source device. Pass a 'const struct device *' rather than a 'const char *' for more type precision / safety. Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com> Cc: Dave Jiang <dave.jiang@intel.com> Cc: Steven Rostedt <rostedt@goodmis.org> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Link: https://lore.kernel.org/r/167030091477.4045167.15174636482098463885.stgit@dwillia2-xfh.jf.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
0a19bfc8 |
|
01-Dec-2022 |
Dan Williams <dan.j.williams@intel.com> |
cxl/port: Add RCD endpoint port enumeration Unlike a CXL memory expander in a VH topology that has at least one intervening 'struct cxl_port' instance between itself and the CXL root device, an RCD attaches one level higher. For example:

     VH
               ┌──────────┐
               │ ACPI0017 │
               │  root0   │
               └─────┬────┘
                     │
               ┌─────┴────┐
               │  dport0  │
         ┌─────┤ ACPI0016 ├─────┐
         │     │  port1   │     │
         │     └────┬─────┘     │
         │          │           │
      ┌──┴───┐  ┌───┴──┐   ┌────┴──┐
      │dport0│  │dport1│   │dport2 │
      │ RP0  │  │ RP1  │   │ RP2   │
      └──────┘  └──┬───┘   └───────┘
                   │
              ┌────┴────┐
              │endpoint0│
              │  port2  │
              └─────────┘

...vs:

     RCH
              ┌──────────┐
              │ ACPI0017 │
              │  root0   │
              └────┬─────┘
                   │
               ┌───┴────┐
               │ dport0 │
               │ACPI0016│
               └───┬────┘
                   │
              ┌────┴─────┐
              │endpoint0 │
              │  port1   │
              └──────────┘

So arrange for the endpoint port in the RCH/RCD case to appear directly connected to the host-bridge in its singular role as a dport. Compare that to the VH case where the host-bridge serves a dual role as a 'cxl_dport' for the CXL root device *and* a 'cxl_port' upstream port for the Root Ports in the Root Complex that are modeled as 'cxl_dport' instances in the CXL topology. Another deviation from the VH case is that RCDs may need to look up their component registers from the Root Complex Register Block (RCRB). That platform firmware specified RCRB area is cached by the cxl_acpi driver and conveyed via the host-bridge dport to the cxl_mem driver to perform the cxl_rcrb_to_component() lookup for the endpoint port (See 9.11.8 CXL Devices Attached to an RCH for the lookup of the upstream port component registers). Tested-by: Robert Richter <rrichter@amd.com> Link: https://lore.kernel.org/r/166993045621.1882361.1730100141527044744.stgit@dwillia2-xfh.jf.intel.com Reviewed-by: Robert Richter <rrichter@amd.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
6155ccc9 |
|
30-Nov-2022 |
Dave Jiang <dave.jiang@intel.com> |
cxl/pci: Add callback to log AER correctable error Add AER error handler callback to read the RAS capability structure correctable error (CE) status register for the CXL device. Log the error as a trace event and clear the error. For CXL devices, the driver also needs to write back to the status register to clear the unmasked correctable errors. See CXL spec rev3.0 8.2.4.16 for RAS capability structure CE Status Register. Suggested-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/166985287203.2871899.13605149073500556137.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
2905cb52 |
|
29-Nov-2022 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Add (hopeful) error handling support Add nominal error handling that tears down CXL.mem in response to error notifications that imply a device reset. Given some CXL.mem may be operating as System RAM, there is a high likelihood that these error events are fatal. However, if the system survives the notification the expectation is that the driver behavior is equivalent to a hot-unplug and re-plug of an endpoint. Note that this does not change the mask values from the default. That awaits CXL _OSC support to determine whether platform firmware is in control of the mask registers. Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/166974413966.1608150.15522782911404473932.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
2f6e9c30 |
|
29-Nov-2022 |
Dave Jiang <dave.jiang@intel.com> |
cxl/pci: add tracepoint events for CXL RAS Add tracepoint events for recording the CXL uncorrectable and correctable errors. For uncorrectable errors, there is additional data of 512B from the header log register (CXL spec rev3 8.2.4.16.7). The trace event will take in a dynamic array that will dump the entire Header Log data. If multiple errors are set in the status register, then the 'first error' field (CXL spec rev3 8.2.4.16.6) is read from the Error Capabilities and Control Register in order to determine the error. This implementation does not include CXL IDE Error details. Cc: Steven Rostedt <rostedt@goodmis.org> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org> Link: https://lore.kernel.org/r/166974413388.1608150.5875712482260436188.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
bd09626b |
|
29-Nov-2022 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Find and map the RAS Capability Structure The RAS Capability Structure has some ancillary information that may be relevant with respect to AER events: link and protocol error status registers. Map the RAS Capability Registers in support of defining a 'struct pci_error_handlers' instance for the cxl_pci driver. Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/166974412803.1608150.7096566580400947001.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
6c7f4f1e |
|
29-Nov-2022 |
Dan Williams <dan.j.williams@intel.com> |
cxl/core/regs: Make cxl_map_{component, device}_regs() device generic There is no need to carry the barno and the block offset through the stack, just convert them to a resource base immediately. Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/166974411035.1608150.8605988708101648442.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
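The conversion amounts to computing the CPU physical base once at the top of the stack, roughly as below; pdev and map come from the surrounding probe context and the field names are approximate for the register-map code of that era:

    /* BAR number + block offset collapse into a single resource base */
    resource_size_t base = pci_resource_start(pdev, map->barno) +
                           map->block_offset;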
|
#
43a2fb3a |
|
29-Nov-2022 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Kill cxl_map_regs() The component registers are currently unused by the cxl_pci driver. Only the physical address base of the component registers is conveyed to the cxl_mem driver. Just call cxl_map_device_regs() directly. Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/166974410443.1608150.15855499736133349600.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
f17b558d |
|
01-Dec-2022 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pmem: Refactor nvdimm device registration, delete the workqueue The three objects 'struct cxl_nvdimm_bridge', 'struct cxl_nvdimm', and 'struct cxl_pmem_region' manage CXL persistent memory resources. The bridge represents base platform resources, the nvdimm represents one or more endpoints, and the region is a collection of nvdimms that contribute to an assembled address range. Their relationship is such that a region is torn down if any component endpoints are removed. All regions and endpoints are torn down if the foundational bridge device goes down. A workqueue was deployed to manage these interdependencies, but it is difficult to reason about, and fragile. A recent attempt to take the CXL root device lock in the cxl_mem driver was reported by lockdep as colliding with the flush_work() in the cxl_pmem flows. Instead of the workqueue, arrange for all pmem/nvdimm devices to be torn down immediately and hierarchically. A similar change is made to both the 'cxl_nvdimm' and 'cxl_pmem_region' objects. For bisect-ability both changes are made in the same patch which unfortunately makes the patch bigger than desired. Arrange for cxl_memdev and cxl_region to register a cxl_nvdimm and cxl_pmem_region as a devres release action of the bridge device. Additionally, include a devres release action of the cxl_memdev or cxl_region device that triggers the bridge's release action if an endpoint exits before the bridge. I.e. this allows either unplugging the bridge, or unplugging an endpoint, to result in the same cleanup actions. To keep the patch smaller the cleanup of the now defunct workqueue infrastructure is saved for a follow-on patch. Tested-by: Robert Richter <rrichter@amd.com> Link: https://lore.kernel.org/r/166993041773.1882361.16444301376147207609.stgit@dwillia2-xfh.jf.intel.com Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
487d828d |
|
26-Sep-2022 |
Ira Weiny <ira.weiny@intel.com> |
cxl/doe: Request exclusive DOE access The PCIe Data Object Exchange (DOE) mailbox is a protocol run over configuration cycles. It assumes one initiator at a time. While the kernel has control of the mailbox, user space writes could interfere with the kernel's access. Mark DOE mailbox config space exclusive when iterated by the CXL driver. Signed-off-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20220926215711.2893286-3-ira.weiny@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
3eddcc93 |
|
19-Jul-2022 |
Ira Weiny <ira.weiny@intel.com> |
cxl/pci: Create PCI DOE mailbox's for memory devices DOE mailbox objects will be needed for various mailbox communications with each memory device. Iterate each DOE mailbox capability and create PCI DOE mailbox objects as found. It is not anticipated that this is the final resting place for the iteration of the DOE devices. The support of switch ports will drive this code into the PCIe side. In this imagined architecture the CXL port driver would then query into the PCI device for the DOE mailbox array. For now creating the mailboxes in the CXL port is good enough for the endpoints. Later PCIe ports will need to support this to support switch ports more generically. Cc: Dan Williams <dan.j.williams@intel.com> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Lukas Wunner <lukas@wunner.de> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Ira Weiny <ira.weiny@intel.com> Link: https://lore.kernel.org/r/20220719205249.566684-5-ira.weiny@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
d3b75029 |
|
21-May-2022 |
Dan Williams <dan.j.williams@intel.com> |
cxl/mem: Convert partition-info to resources To date the per-device-partition DPA range information has only been used for enumeration purposes. In preparation for allocating regions from available DPA capacity, convert those ranges into DPA-type resource trees. With resources and the new add_dpa_res() helper (sketched below) some open-coded end address calculations and debug prints can be cleaned. The 'cxlds->pmem_res' and 'cxlds->ram_res' resources are child resources of the total-device DPA space and they in turn will host DPA allocations from cxl_endpoint_decoder instances (tracked by cxled->dpa_res). Cc: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/165603878921.551046.8127845916514734142.stgit@dwillia2-xfh Signed-off-by: Dan Williams <dan.j.williams@intel.com>
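An abridged sketch of the helper's shape: insert a named partition resource under the total-DPA parent, skipping zero-capacity partitions:

    static int add_dpa_res(struct device *dev, struct resource *parent,
                           struct resource *res, resource_size_t start,
                           resource_size_t size, const char *type)
    {
            int rc;

            *res = (struct resource) {
                    .name = type,
                    .start = start,
                    .end = start + size - 1,
                    .flags = IORESOURCE_MEM,
            };
            if (resource_size(res) == 0) {
                    dev_dbg(dev, "DPA(%s): no capacity\n", type);
                    return 0;
            }
            rc = request_resource(parent, res);
            if (rc) {
                    dev_err(dev, "DPA(%s): failed to track %pr (%d)\n",
                            type, res, rc);
                    return rc;
            }
            dev_dbg(dev, "DPA(%s): %pr\n", type, res);
            return 0;
    }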
|
#
14d78874 |
|
18-May-2022 |
Dan Williams <dan.j.williams@intel.com> |
cxl/mem: Consolidate CXL DVSEC Range enumeration in the core In preparation for fixing the setting of the 'mem_enabled' bit in CXL DVSEC Control register, move all CXL DVSEC range enumeration into the same source file. Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/165291688886.1426646.15046138604010482084.stgit@dwillia2-xfh Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
2e4ba0ec |
|
18-May-2022 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Move cxl_await_media_ready() to the core Allow cxl_await_media_ready() to be mocked for testing purposes rather than carrying the maintenance burden of an indirect function call in the mainline driver. With the move cxl_await_media_ready() can no longer reuse the mailbox timeout override, so add a media_ready_timeout module parameter to the core to backfill. Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/165291688340.1426646.4755627801983775011.stgit@dwillia2-xfh Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
194d5eda |
|
18-May-2022 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Drop wait_for_valid() from cxl_await_media_ready() A check of mem_info_valid already happens in __cxl_dvsec_ranges(). Rely on that instead of calling wait_for_valid() again. Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/165291686632.1426646.7479581732894574486.stgit@dwillia2-xfh Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
1e14c9fb |
|
18-May-2022 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Consolidate wait_for_media() and wait_for_media_ready() Now that wait_for_media() does nothing supplemental to wait_for_media_ready() just promote wait_for_media_ready() to a common helper and drop wait_for_media(). Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/165291686046.1426646.4390664747934592185.stgit@dwillia2-xfh Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
36bfc6ad |
|
14-Mar-2022 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Make cxl_dvsec_ranges() failure not fatal to cxl_pci A failure in cxl_dvsec_ranges(), the helper for enumerating the presence of an active legacy CXL.mem configuration on a CXL 2.0 Memory Expander, is not fatal for cxl_pci because there is still value in enabling mailbox operations even if CXL.mem operation is disabled. Recall that the reason cxl_pci does this initialization and not cxl_mem is to preserve the useful property (for unit testing) that cxl_mem is cxl_memdev + mmio generic, and does not require access to a 'struct pci_dev' to issue config cycles. Update 'struct cxl_endpoint_dvsec_info' to carry either a positive number of non-zero size legacy CXL DVSEC ranges, or the negative error code from __cxl_dvsec_ranges() in its @ranges member. Reported-by: Krzysztof Zach <krzysztof.zach@intel.com> Fixes: 560f78559006 ("cxl/pci: Retrieve CXL DVSEC memory info") Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Davidlohr Bueso <dave@stgolabs.net> Link: https://lore.kernel.org/r/164730735869.3806189.4032428192652531946.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
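A sketch of the dual-purpose @ranges member; the consuming check below is illustrative, not the mainline code:

    struct cxl_endpoint_dvsec_info {
        bool mem_enabled;
        int ranges;                  /* >= 0: count of non-zero ranges; < 0: errno */
        struct range dvsec_range[2]; /* struct range from <linux/range.h> */
    };

    /* illustrative consumer */
    static bool cxl_dvsec_decode_usable(struct cxl_endpoint_dvsec_info *info)
    {
        if (info->ranges < 0)
            return false; /* enumeration failed; mailbox-only operation */
        return info->ranges > 0;
    }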
|
#
e39f9be0 |
|
14-Mar-2022 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Add debug for DVSEC range init failures In preparation for not treating DVSEC range initialization failures as fatal to cxl_pci_probe() add individual dev_dbg() statements for each of the major failure reasons in cxl_dvsec_ranges(). The rationale for cxl_dvsec_ranges() failure not being fatal is that there is still value for cxl_pci to enable mailbox operations even if CXL.mem operation is disabled. Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Ben Widawsky <ben.widawsky@intel.com> Reviewed-by: Davidlohr Bueso <dave@stgolabs.net> Link: https://lore.kernel.org/r/164730734812.3806189.2726330688692684104.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
c43e036d |
|
03-Apr-2022 |
Davidlohr Bueso <dave@stgolabs.net> |
cxl/mbox: Use new return_code handling Use the global cxl_mbox_cmd_rc table to improve debug messaging in __cxl_pci_mbox_send_cmd() and allow cxl_mbox_send_cmd() to map to proper kernel style errno codes - this patch continues to use -ENXIO only, so there is no change in semantics. Signed-off-by: Davidlohr Bueso <dave@stgolabs.net> Reviewed-by: Adam Manzanares <a.manzanares@samsung.com> Link: https://lore.kernel.org/r/20220404021216.66841-5-dave@stgolabs.net Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
92fcc1ab |
|
03-Apr-2022 |
Davidlohr Bueso <dave@stgolabs.net> |
cxl/mbox: Improve handling of mbox_cmd hw return codes Upon a completed command the caller is still expected to check the actual return_code register to ensure it succeeded. This adds, per the spec, the potential command return codes. It maps the hardware return code to the kernel's errno style, and by default continues to use -ENXIO (Command completed, but device reported an error). Signed-off-by: Davidlohr Bueso <dave@stgolabs.net> Reviewed-by: Adam Manzanares <a.manzanares@samsung.com> Link: https://lore.kernel.org/r/20220404021216.66841-4-dave@stgolabs.net Signed-off-by: Dan Williams <dan.j.williams@intel.com>
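A sketch of the table-driven mapping; the enum names and descriptions are assumptions modeled on the spec-defined return codes, and everything still defaults to -ENXIO as the message says:

    enum {
        CXL_MBOX_CMD_RC_SUCCESS = 0,
        CXL_MBOX_CMD_RC_BACKGROUND,
        CXL_MBOX_CMD_RC_INPUT,
        CXL_MBOX_CMD_RC_UNSUPPORTED,
        /* ... remaining spec-defined codes ... */
    };

    static const struct {
        int err;
        const char *desc;
    } cxl_mbox_cmd_rctable[] = {
        [CXL_MBOX_CMD_RC_SUCCESS]     = { 0,      "success" },
        [CXL_MBOX_CMD_RC_BACKGROUND]  = { -ENXIO, "background cmd started" },
        [CXL_MBOX_CMD_RC_INPUT]       = { -ENXIO, "cmd input was invalid" },
        [CXL_MBOX_CMD_RC_UNSUPPORTED] = { -ENXIO, "cmd is not supported" },
    };

    static inline int cxl_mbox_cmd_rc2errno(struct cxl_mbox_cmd *mbox_cmd)
    {
        if (mbox_cmd->return_code >= ARRAY_SIZE(cxl_mbox_cmd_rctable))
            return -ENXIO; /* unknown codes stay -ENXIO */
        return cxl_mbox_cmd_rctable[mbox_cmd->return_code].err;
    }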
|
#
cbe83a20 |
|
03-Apr-2022 |
Davidlohr Bueso <dave@stgolabs.net> |
cxl/pci: Use CXL_MBOX_SUCCESS to check against mbox_cmd return code Also mention the need for the caller to check against any errors from the hardware in return_code. Signed-off-by: Davidlohr Bueso <dave@stgolabs.net> Reviewed-by: Adam Manzanares <a.manzanares@samsung.com> Link: https://lore.kernel.org/r/20220404021216.66841-3-dave@stgolabs.net Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
d2882041 |
|
08-Apr-2022 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Drop shadowed variable 0day reports that wait_for_media_ready() declares an @rc variable twice. >> drivers/cxl/pci.c:439:7: warning: Local variable 'rc' shadows outer variable [shadowVariable] int rc; ^ drivers/cxl/pci.c:431:6: note: Shadowed declaration int rc, i; ^ drivers/cxl/pci.c:439:7: note: Shadow variable int rc; ^ Cc: Randy Dunlap <rdunlap@infradead.org> Fixes: 523e594d9cc0 ("cxl/pci: Implement wait for media active") Acked-by: Randy Dunlap <rdunlap@infradead.org> Tested-by: Randy Dunlap <rdunlap@infradead.org> Reported-by: kernel test robot <lkp@intel.com> Reviewed-by: Vishal Verma <vishal.l.verma@intel.com> Link: https://lore.kernel.org/r/164944636936.455177.14136200464724208233.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
bcc79ea3 |
|
31-Jan-2022 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Emit device serial number Per the CXL specification (8.1.12.2 Memory Device PCIe Capabilities and Extended Capabilities) the Device Serial Number capability is mandatory. Emit it for user tooling to identify devices. It is reasonable to ask whether the attribute should be added to the list of PCI sysfs device attributes. The PCI layer can optionally emit it too, but the CXL subsystem is aiming to preserve its independence and the possibility of CXL topologies with non-PCI devices in it. To date that has only proven useful for the 'cxl_test' model, but as can be seen with ACPI0016 devices, sometimes all that is needed is a platform firmware table to point to CXL Component Registers in MMIO space to define a "CXL" device. Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/164366608838.196598.16856227191534267098.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
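A sketch of the attribute, assuming the serial is cached at probe time from the PCIe Device Serial Number capability (pci_get_dsn() is the stock PCI helper for reading the DSN):

    /* in cxl_pci_probe(): cxlds->serial = pci_get_dsn(pdev); */

    static ssize_t serial_show(struct device *dev,
                               struct device_attribute *attr, char *buf)
    {
        struct cxl_memdev *cxlmd = to_cxl_memdev(dev);

        return sysfs_emit(buf, "%#llx\n", cxlmd->cxlds->serial);
    }
    static DEVICE_ATTR_RO(serial);

Keeping the attribute on the memdev rather than the PCI device preserves the subsystem independence argued for above.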
|
#
523e594d |
|
23-Jan-2022 |
Ben Widawsky <ben.widawsky@intel.com> |
cxl/pci: Implement wait for media active CXL 2.0 8.1.3.8.2 states: Memory_Active: When set, indicates that the CXL Range 1 memory is fully initialized and available for software use. Must be set within Range 1 Memory_Active_Timeout of deassertion of reset to CXL device if CXL.mem HwInit Mode=1 Unfortunately, Memory_Active can take quite a long time depending on media size (up to 256s per the 2.0 spec). Provide a callback for the eventual establishment of CXL.mem operations via the 'cxl_mem' driver, the driver for the 'struct cxl_memdev'. The implementation waits for 60s by default for now and can be overridden by the mbox_ready_time module parameter. Signed-off-by: Ben Widawsky <ben.widawsky@intel.com> [djbw: switch to sleeping wait] Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/164298427373.3018233.9309741847039301834.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
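A sketch of the sleeping wait; the register offset and bit position follow the CXL DVSEC layout, but the define values and names here are assumptions:

    #define CXL_DVSEC_RANGE_SIZE_LOW(i) (0x18 + (i) * 0x10)
    #define CXL_DVSEC_MEM_ACTIVE        BIT(6)

    static int cxl_await_media_ready(struct cxl_dev_state *cxlds)
    {
        struct pci_dev *pdev = to_pci_dev(cxlds->dev);
        int d = cxlds->cxl_dvsec, i;

        for (i = 60; i; i--) { /* 60s default, modparam overridable */
            u32 temp;
            int rc;

            rc = pci_read_config_dword(pdev,
                                       d + CXL_DVSEC_RANGE_SIZE_LOW(0),
                                       &temp);
            if (rc)
                return rc;
            if (FIELD_GET(CXL_DVSEC_MEM_ACTIVE, temp))
                return 0;
            msleep(1000); /* sleeping, not spinning, per the [djbw] note */
        }
        return -ETIMEDOUT;
    }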
|
#
560f7855 |
|
01-Feb-2022 |
Ben Widawsky <ben.widawsky@intel.com> |
cxl/pci: Retrieve CXL DVSEC memory info Before CXL 2.0 HDM Decoder Capability mechanisms can be utilized in a device the driver must determine that the device is ready for CXL.mem operation and that platform firmware, or some other agent, has established an active decode via the legacy CXL 1.1 decoder mechanism. This legacy mechanism is defined in the CXL DVSEC as a set of range registers and status bits that take time to settle after a reset. Validate the CXL memory decode setup via the DVSEC and cache it for later consideration by the cxl_mem driver (to be added). Failure to validate is not fatal to the cxl_pci driver since that is only providing CXL command support over PCI.mmio, and might be needed to rectify CXL DVSEC validation problems. Any potential ranges that the device is already claiming via DVSEC need to be reconciled with the dynamic provisioning ranges provided by platform firmware (like ACPI CEDT.CFMWS). Leave that reconciliation to the cxl_mem driver. [djbw: shorten defines] [djbw: change precise spin wait to generous msleep] Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Ben Widawsky <ben.widawsky@intel.com> [djbw: clarify changelog] Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/164375911821.559935.7375160041663453400.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
06e279e5 |
|
01-Feb-2022 |
Ben Widawsky <ben.widawsky@intel.com> |
cxl/pci: Cache device DVSEC offset The PCIe device DVSEC, defined in the CXL 2.0 spec section 8.1.3, is required to be implemented by CXL 2.0 endpoint devices. In preparation for consuming this information in a new cxl_mem driver, retrieve the CXL DVSEC position and warn about the implications of not finding it. Allow for mailbox operation even if the CXL DVSEC is missing. Signed-off-by: Ben Widawsky <ben.widawsky@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/164375309615.513620.7874131241128599893.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
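Retrieval then reduces to one lookup at probe time; a sketch (the define values match the spec's CXL vendor ID and device DVSEC ID, but the names are assumptions):

    #define PCI_DVSEC_VENDOR_ID_CXL 0x1e98
    #define CXL_DVSEC_PCIE_DEVICE   0

    cxlds->cxl_dvsec = pci_find_dvsec_capability(pdev, PCI_DVSEC_VENDOR_ID_CXL,
                                                 CXL_DVSEC_PCIE_DEVICE);
    if (!cxlds->cxl_dvsec)
        dev_warn(&pdev->dev,
                 "Device DVSEC not present, skipping CXL.mem init\n");
    /* mailbox operation proceeds regardless */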
|
#
4112a08d |
|
01-Feb-2022 |
Ben Widawsky <ben.widawsky@intel.com> |
cxl/pci: Store component register base in cxlds In preparation for defining a cxl_port object to represent the decoder resources of a memory expander capture the component register base address. The port driver uses the component register base to enumerate the HDM Decoder Capability structure. Unlike other cxl_port objects the endpoint port decodes from upstream SPA to downstream DPA rather than upstream port to downstream port. Signed-off-by: Ben Widawsky <ben.widawsky@intel.com> Reported-by: kernel test robot <lkp@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> [djbw: clarify changelog] Link: https://lore.kernel.org/r/164375084181.484304.3919737667590006795.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
af9cae9f |
|
23-Jan-2022 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Rename pci.h to cxlpci.h Similar to the mem.h rename, if the core wants to reuse definitions from drivers/cxl/pci.h it is unable to use <pci.h> as that collides with archs that have an arch/$arch/include/asm/pci.h, like MIPS. Reported-by: kernel test robot <lkp@intel.com> Acked-by: Ben Widawsky <ben.widawsky@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/164298422510.3018233.14693126572756675563.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
303ebc1b |
|
23-Jan-2022 |
Ben Widawsky <ben.widawsky@intel.com> |
cxl/acpi: Map component registers for Root Ports This implements the TODO in cxl_acpi for mapping component registers. cxl_acpi becomes the second consumer of CXL register block enumeration (cxl_pci being the first). Moving the functionality to cxl_core allows both of these drivers to use the functionality. Equally importantly it allows cxl_core to use the functionality in the future. CXL 2.0 root ports are similar to CXL 2.0 Downstream Ports with the main distinction being they're a part of the CXL 2.0 host bridge. While mapping their component registers is not immediately useful for the CXL drivers, the movement of register block enumeration into core is a vital step towards HDM decoder programming. Signed-off-by: Ben Widawsky <ben.widawsky@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> [djbw: fix cxl_regmap_to_base() failure cases] Link: https://lore.kernel.org/r/164298415080.3018233.14694957480228676592.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
46c6ad27 |
|
23-Jan-2022 |
Ben Widawsky <ben.widawsky@intel.com> |
cxl: Flesh out register names Get a better naming scheme in place for upcoming additions. By dropping redundant usages of CXL and DVSEC where appropriate we can get more concise and also more grepable defines. Reviewed-by: Dan Williams <dan.j.williams@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Ben Widawsky <ben.widawsky@intel.com> Link: https://lore.kernel.org/r/164298414022.3018233.15522855498759815097.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
4f195ee7 |
|
23-Jan-2022 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Defer mailbox status checks to command timeouts Device status can change without warning at any point in time. This effectively means that no amount of status checking before a command is submitted can guarantee that the device is not in an error condition when the command is later submitted. The clearest signal that a device is not able to process commands is if it fails to process commands. With the above understanding in hand, update cxl_pci_setup_mailbox() to validate the readiness of the mailbox once at the beginning of time, and then use timeouts and busy sequencing errors as the only occasions to report status. Just as before, unless and until the driver gains a reset recovery path, doorbell clearing failures by the device are fatal to mailbox operations. Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/164298413480.3018233.9643395389297971819.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
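The send path's one remaining status gate becomes a bounded doorbell poll; a sketch (cxl_doorbell_busy() is assumed to test the busy bit in the mailbox control register):

    static int cxl_pci_mbox_wait_for_doorbell(struct cxl_dev_state *cxlds)
    {
        const unsigned long start = jiffies;
        unsigned long end = start + msecs_to_jiffies(2000); /* 2s per spec */

        while (cxl_doorbell_busy(cxlds)) {
            if (time_after(jiffies, end)) {
                /* recheck in case preemption stretched the poll */
                if (!cxl_doorbell_busy(cxlds))
                    break;
                return -ETIMEDOUT; /* fatal absent a reset recovery path */
            }
            cpu_relax();
        }
        return 0;
    }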
|
#
229e8828 |
|
31-Jan-2022 |
Ben Widawsky <ben.widawsky@intel.com> |
cxl/pci: Implement Interface Ready Timeout The original driver implementation used the doorbell timeout for the Mailbox Interface Ready bit to piggy back off of, since the latter does not have a defined timeout. This functionality, introduced in commit 8adaf747c9f0 ("cxl/mem: Find device capabilities"), needs improvement as the recent "Add Mailbox Ready Time" ECN indicates that the mailbox ready time can be significantly longer than 2 seconds. While the specification limits the maximum timeout to 256s, the cxl_pci driver gives up on the mailbox after 60s. This value corresponds with important timeout values already present in the kernel. A module parameter is provided as an emergency override and represents the default Linux policy for all devices. Signed-off-by: Ben Widawsky <ben.widawsky@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> [djbw: add modparam, drop check_device_status()] Co-developed-by: Dan Williams <dan.j.williams@intel.com> Link: https://lore.kernel.org/r/164367306565.208548.1932299464604450843.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
5e2411ae |
|
02-Nov-2021 |
Ira Weiny <ira.weiny@intel.com> |
cxl/memdev: Change cxl_mem to a more descriptive name The 'struct cxl_mem' object actually represents the state of a CXL device within the driver. Comments indicating that 'struct cxl_mem' is a device itself are incorrect. It is data layered on top of a CXL Memory Expander class device. Rename it 'struct cxl_dev_state'. The 'struct cxl_memdev' structure represents a Linux CXL memory device object, and it uses services and information provided by 'struct cxl_dev_state'. Update the structure name, function names, and the kdocs to reflect the real uses of this structure. Some helper functions that were previously prefixed "cxl_mem_" are renamed to just "cxl_". Acked-by: Ben Widawsky <ben.widawsky@intel.com> Signed-off-by: Ira Weiny <ira.weiny@intel.com> Link: https://lore.kernel.org/r/20211102202901.3675568-3-ira.weiny@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
55006a2c |
|
09-Oct-2021 |
Ben Widawsky <ben.widawsky@intel.com> |
cxl/pci: Use pci core's DVSEC functionality Reduce maintenance burden of DVSEC query implementation by using the centralized PCI core implementation. Signed-off-by: Ben Widawsky <ben.widawsky@intel.com> [djbw: kill cxl_pci_dvsec()] Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/163379788528.692348.11581080806976608802.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
85afc317 |
|
15-Oct-2021 |
Ben Widawsky <ben.widawsky@intel.com> |
cxl/pci: Split cxl_pci_setup_regs() In preparation for moving parts of register mapping to cxl_core, split cxl_pci_setup_regs() into a helper that finds register blocks, (cxl_find_regblock()), and a generic wrapper that probes the precise register sets within a block (cxl_setup_regs()). Move the actual mapping (cxl_map_regs()) of the only register-set that cxl_pci cares about (memory device registers) up a level from the former cxl_pci_setup_regs() into cxl_pci_probe(). With this change the unused component registers are no longer mapped, but the helpers are primed to move into the core. [djbw: drop cxl_map_regs() for component registers] Signed-off-by: Ben Widawsky <ben.widawsky@intel.com> [djbw: rebase on the cxl_register_map refactor] Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/163434053788.914258.18412599112859205220.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
a261e9a1 |
|
15-Oct-2021 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Add @base to cxl_register_map In addition to carrying @barno, @block_offset, and @reg_type, add @base to keep all map/unmap parameters in one object. The helpers cxl_{map,unmap}_regblock() handle adjusting @base to the @block_offset at map and unmap time. Document that @base incorporates @block_offset so that downstream consumers of a mapped cxl_register_map instance do not need to perform any fixups and can use @base directly. Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/163433497228.889435.11271988238496181536.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
7dc7a64d |
|
13-Oct-2021 |
Ben Widawsky <ben.widawsky@intel.com> |
cxl/pci: Make more use of cxl_register_map The structure exists to pass around information about register mapping. Use it for passing @barno and @block_offset, and eliminate duplicate local variables. The helpers that use @map do not care about @cxlm, so just pass them a pdev instead. [djbw: reorder before cxl_pci_setup_regs() refactor to improve readability] Signed-off-by: Ben Widawsky <ben.widawsky@intel.com> Reported-by: kernel test robot <lkp@intel.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> [djbw: separate @base conversion] Link: https://lore.kernel.org/r/163416901172.806743.10056306321247850914.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
84e36a9d |
|
09-Oct-2021 |
Ben Widawsky <ben.widawsky@intel.com> |
cxl/pci: Remove pci request/release regions Quoting Dan, "... the request + release regions should probably just be dropped. It's not like any of the register enumeration would collide with someone else who already has the registers mapped. The collision only comes when the registers are mapped for their final usage, and that will have more precision in the request." Suggested-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Ben Widawsky <ben.widawsky@intel.com> Link: https://lore.kernel.org/r/163379785872.692348.8981679111988251260.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
ca76a3a8 |
|
15-Oct-2021 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Fix NULL vs ERR_PTR confusion cxl_pci_map_regblock() may return an ERR_PTR(), but cxl_pci_setup_regs() is only prepared for NULL as the error case. Pick the minimal fix for -stable backport purposes and just have cxl_pci_map_regblock() return NULL for errors. Fixes: f8a7e8c29be8 ("cxl/pci: Reserve all device regions at once") Cc: <stable@vger.kernel.org> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/163433325724.834522.17809774578178224149.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
d22fed9c |
|
09-Oct-2021 |
Ben Widawsky <ben.widawsky@intel.com> |
cxl/pci: Remove dev_dbg for unknown register blocks While interesting to driver developers, the dev_dbg message doesn't do much except clutter up logs. This information should be attainable through sysfs, and someday lspci-like utilities. This change additionally helps reduce the LOC in a subsequent patch to refactor some of cxl_pci register mapping. Signed-off-by: Ben Widawsky <ben.widawsky@intel.com> Link: https://lore.kernel.org/r/163379784717.692348.3478221381958300790.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
ed97afb5 |
|
13-Sep-2021 |
Ben Widawsky <ben.widawsky@intel.com> |
cxl/pci: Disambiguate cxl_pci further from cxl_mem Commit 21e9f76733a8 ("cxl: Rename mem to pci") introduced the cxl_pci driver which had formerly been named cxl_mem. At the time, the goal was to be as light touch as possible because there were other patches in flight. Since things have settled now, and a new cxl_mem driver will be introduced shortly, spend the LOC now to clean up the existing names. While here, fix the kernel docs to explain the situation better after the core rework that has already landed. Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Ben Widawsky <ben.widawsky@intel.com> Link: https://lore.kernel.org/r/20210913163324.1008564-4-ben.widawsky@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
5a2328f4 |
|
08-Sep-2021 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Use module_pci_driver Now that cxl_mem_{init,exit} no longer need to manage debugfs, switch back to the smaller form of the boilerplate. Acked-by: Ben Widawsky <ben.widawsky@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/163116435825.2460985.7201322215431441130.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
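The reduction is essentially this (a sketch; the table and probe symbol names are assumed):

    static struct pci_driver cxl_pci_driver = {
        .name = KBUILD_MODNAME,
        .id_table = cxl_mem_pci_tbl,
        .probe = cxl_pci_probe,
    };
    /* replaces cxl_mem_init()/cxl_mem_exit() plus module_init/module_exit */
    module_pci_driver(cxl_pci_driver);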
|
#
4faf31b4 |
|
08-Sep-2021 |
Dan Williams <dan.j.williams@intel.com> |
cxl/mbox: Move mailbox and other non-PCI specific infrastructure to the core Now that the internals of mailbox operations are abstracted from the PCI specifics, a bulk of infrastructure can move to the core. The CXL_PMEM driver intends to proxy LIBNVDIMM UAPI and driver requests to the equivalent functionality provided by the CXL hardware mailbox interface. In support of that intent move the mailbox implementation to a shared location for the CXL_PCI driver native IOCTL path and CXL_PMEM nvdimm command proxy path to share. A unit test framework seeks to implement a unit test backend transport for mailbox commands to communicate mocked up payloads. It can reuse all of the mailbox infrastructure minus the PCI specifics, so that also gets moved to the core. Finally, with the mailbox infrastructure and ioctl handling being transport generic, there is no longer any need to pass file_operations to devm_cxl_add_memdev(). That allows all the ioctl boilerplate to move into the core for unit test reuse. No functional change intended, just code movement. Acked-by: Ben Widawsky <ben.widawsky@intel.com> Reported-by: kernel test robot <lkp@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/163116435233.2460985.16197340449713287180.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
4cb35f1c |
|
08-Sep-2021 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Drop idr.h Commit 3d135db51024 ("cxl/core: Move memdev management to core") left this straggling include for cxl_memdev setup. Clean it up. Cc: Ben Widawsky <ben.widawsky@intel.com> Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Ben Widawsky <ben.widawsky@intel.com> Link: https://lore.kernel.org/r/163116434668.2460985.12264757586266849616.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
b64955a9 |
|
08-Sep-2021 |
Dan Williams <dan.j.williams@intel.com> |
cxl/mbox: Introduce the mbox_send operation In preparation for implementing a unit test backend transport for ioctl operations, and making the mailbox available to the cxl/pmem infrastructure, move the existing PCI specific portion of mailbox handling to an "mbox_send" operation. With this split all the PCI-specific transport details are comprehended by a single operation and the rest of the mailbox infrastructure is 'struct cxl_mem' and 'struct cxl_memdev' generic. Acked-by: Ben Widawsky <ben.widawsky@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Ben Widawsky <ben.widawsky@intel.com> Link: https://lore.kernel.org/r/163116434098.2460985.9004760022659400540.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
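A sketch of the resulting split, using the 'struct cxl_mem' naming of this era (fields abbreviated):

    struct cxl_mbox_cmd {
        u16 opcode;
        void *payload_in;
        void *payload_out;
        size_t size_in;
        size_t size_out;
        u16 return_code;
    };

    struct cxl_mem {
        /* ...transport-generic state... */
        int (*mbox_send)(struct cxl_mem *cxlm, struct cxl_mbox_cmd *cmd);
    };

The PCI driver supplies the one function pointer; the ioctl and pmem proxy paths call it without knowing the transport underneath.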
|
#
13e7749d |
|
13-Sep-2021 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Clean up cxl_mem_get_partition_info() Commit 0b9159d0ff21 ("cxl/pci: Store memory capacity values") missed updating the kernel-doc for 'struct cxl_mem' leading to the following warnings: ./scripts/kernel-doc -v drivers/cxl/cxlmem.h 2>&1 | grep warn drivers/cxl/cxlmem.h:107: warning: Function parameter or member 'total_bytes' not described in 'cxl_mem' drivers/cxl/cxlmem.h:107: warning: Function parameter or member 'volatile_only_bytes' not described in 'cxl_mem' drivers/cxl/cxlmem.h:107: warning: Function parameter or member 'persistent_only_bytes' not described in 'cxl_mem' drivers/cxl/cxlmem.h:107: warning: Function parameter or member 'partition_align_bytes' not described in 'cxl_mem' drivers/cxl/cxlmem.h:107: warning: Function parameter or member 'active_volatile_bytes' not described in 'cxl_mem' drivers/cxl/cxlmem.h:107: warning: Function parameter or member 'active_persistent_bytes' not described in 'cxl_mem' drivers/cxl/cxlmem.h:107: warning: Function parameter or member 'next_volatile_bytes' not described in 'cxl_mem' drivers/cxl/cxlmem.h:107: warning: Function parameter or member 'next_persistent_bytes' not described in 'cxl_mem' Also, it is redundant to describe those same parameters in the kernel-doc for cxl_mem_get_partition_info(). Given the only user of that routine updates the values in @cxlm, just do that implicitly within the helper. Cc: Ira Weiny <ira.weiny@intel.com> Reported-by: Ben Widawsky <ben.widawsky@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/163157174216.2653013.1277706528753990974.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
99e222a5 |
|
08-Sep-2021 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Make 'struct cxl_mem' device type generic In preparation for adding a unit test provider of a cxl_memdev, convert the 'struct cxl_mem' driver context to carry a generic device rather than a pci device. Note, some dev_dbg() lines needed extra reformatting per clang-format. This conversion also allows the cxl_mem_create() and devm_cxl_add_memdev() calling conventions to be simplified. The "host" for a cxl_memdev, must be the same device for the driver that allocated @cxlm. Acked-by: Ben Widawsky <ben.widawsky@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Ben Widawsky <ben.widawsky@intel.com> Link: https://lore.kernel.org/r/163116432973.2460985.7553504957932024222.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
da582aa5 |
|
03-Sep-2021 |
Li Qiang (Johnny Li) <johnny.li@montage-tech.com> |
cxl/pci: Fix debug message in cxl_probe_regs() The indicator string for the mbox and memdev register sets was incorrectly set to 'status' in the error message. Cc: <stable@vger.kernel.org> Fixes: 30af97296f48 ("cxl/pci: Map registers based on capabilities") Signed-off-by: Li Qiang (Johnny Li) <johnny.li@montage-tech.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/163072205089.2250120.8103605864156687395.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
9e56614c |
|
03-Sep-2021 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Fix lockdown level A proposed rework of security_locked_down() users identified that the cxl_pci driver was passing the wrong lockdown_reason. Update cxl_mem_raw_command_allowed() to fail raw command access when raw pci access is also disabled. Fixes: 13237183c735 ("cxl/mem: Add a "RAW" send command") Cc: Ben Widawsky <ben.widawsky@intel.com> Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com> Cc: <stable@vger.kernel.org> Cc: Ondrej Mosnacek <omosnace@redhat.com> Cc: Paul Moore <paul@paul-moore.com> Link: https://lore.kernel.org/r/163072204525.2250120.16615792476976546735.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
ceeb0da0 |
|
17-Jun-2021 |
Ira Weiny <ira.weiny@intel.com> |
cxl/mem: Adjust ram/pmem range to represent DPA ranges CXL spec defines the volatile DPA range to be 0 to Volatile memory size. It further defines the persistent DPA range to follow directly after the end of the Volatile DPA through the persistent memory size. Essentially Volatile DPA range = [0, Volatile size) Persistent DPA range = [Volatile size, Volatile size + Persistent size) Adjust the pmem_range start to reflect this and remove the TODO. Signed-off-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20210617221620.1904031-4-ira.weiny@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
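In code the adjustment amounts to the following (field names are assumptions matching the commit's vocabulary):

    cxlm->ram_range.start = 0;
    cxlm->ram_range.end = cxlm->active_volatile_bytes - 1;

    /* pmem begins exactly where volatile ends */
    cxlm->pmem_range.start = cxlm->active_volatile_bytes;
    cxlm->pmem_range.end = cxlm->active_volatile_bytes +
                           cxlm->active_persistent_bytes - 1;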
|
#
f847502a |
|
10-Aug-2021 |
Ira Weiny <ira.weiny@intel.com> |
cxl/mem: Account for partitionable space in ram/pmem ranges Memory devices may specify volatile only, persistent only, and partitionable space which, when added together, result in a total capacity. If Identify Memory Device.Partition Alignment != 0 the device supports partitionable space. This partitionable space can be split between volatile and persistent space. The total volatile and persistent sizes are reported in Get Partition Info. i.e. active volatile memory = volatile only + partitionable volatile; active persistent memory = persistent only + partitionable persistent Define cxl_mem_get_partition(), check for partitionable support, and use cxl_mem_get_partition() if applicable. Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Ira Weiny <ira.weiny@intel.com> Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
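A sketch of the described flow (the helper name follows the commit text; the surrounding wrapper is an assumption):

    /* hypothetical wrapper illustrating the check described above */
    static int cxl_mem_set_partition_data(struct cxl_mem *cxlm)
    {
        if (cxlm->partition_align_bytes == 0) {
            /* no partitionable space: active == fixed-only capacities */
            cxlm->active_volatile_bytes = cxlm->volatile_only_bytes;
            cxlm->active_persistent_bytes = cxlm->persistent_only_bytes;
            return 0;
        }

        /* otherwise Get Partition Info supplies the active totals */
        return cxl_mem_get_partition(cxlm);
    }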
|
#
0b9159d0 |
|
17-Jun-2021 |
Ira Weiny <ira.weiny@intel.com> |
cxl/pci: Store memory capacity values The Identify Memory Device command returns information about the volatile only and persistent only memory capacities. Store those values in the cxl_mem structure for later use. While at it, reuse those calculations to calculate the ram and pmem ranges. Signed-off-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20210617221620.1904031-2-ira.weiny@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
5b68705d |
|
16-Jul-2021 |
Ben Widawsky <ben.widawsky@intel.com> |
cxl/pci: Simplify register setup It is desirable to retain the mappings from the calling function. By simplifying this code, it will be much more straightforward to do that. Signed-off-by: Ben Widawsky <ben.widawsky@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20210716231548.174778-3-ben.widawsky@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
1e39db57 |
|
16-Jul-2021 |
Ben Widawsky <ben.widawsky@intel.com> |
cxl/pci: Ignore unknown register block types In an effort to explicitly avoid supporting vendor specific register blocks (which can happily be mapped from userspace), entirely skip probing unknown types. The secondary benefit of this will be revealed in the future with code simplification. Signed-off-by: Ben Widawsky <ben.widawsky@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20210716231548.174778-2-ben.widawsky@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
3d135db5 |
|
02-Aug-2021 |
Ben Widawsky <ben.widawsky@intel.com> |
cxl/core: Move memdev management to core The motivation for moving cxl_memdev allocation to the core (beyond better file organization of sysfs attributes in core/ and drivers in cxl/), is that device lifetime is longer than module lifetime. The cxl_pci module should be free to come and go without needing to coordinate with devices that need the text associated with cxl_memdev_release() to stay resident. The move fixes a use after free bug when looping driver load / unload with CONFIG_DEBUG_KOBJECT_RELEASE=y. Another motivation for disconnecting cxl_memdev creation from cxl_pci is to enable other drivers, like a unit test driver, to register memdevs. Fixes: b39cb1052a5c ("cxl/mem: Register CXL memX devices") Signed-off-by: Ben Widawsky <ben.widawsky@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/162792540495.368511.9748638751088219595.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
9cc238c7 |
|
02-Aug-2021 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pci: Introduce cdevm_file_operations In preparation for moving cxl_memdev allocation to the core, introduce cdevm_file_operations to coordinate file operations shutdown relative to driver data release. The motivation for moving cxl_memdev allocation to the core (beyond better file organization of sysfs attributes in core/ and drivers in cxl/), is that device lifetime is longer than module lifetime. The cxl_pci module should be free to come and go without needing to coordinate with devices that need the text associated with cxl_memdev_release() to stay resident. The move will fix a use after free bug when looping driver load / unload with CONFIG_DEBUG_KOBJECT_RELEASE=y. Another motivation for passing in file_operations to the core cxl_memdev creation flow is to allow for alternate drivers, like unit test code, to define their own ioctl backends. Signed-off-by: Ben Widawsky <ben.widawsky@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/162792539962.368511.2962268954245340288.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
5161a55c |
|
02-Aug-2021 |
Ben Widawsky <ben.widawsky@intel.com> |
cxl: Move cxl_core to new directory CXL core is growing, and it's already arguably unmanageable. To support future growth, move core functionality to a new directory and rename the file to represent just bus support. Future work will remove non-bus functionality. Note that mem.h is renamed to cxlmem.h to avoid a namespace collision with the global ARCH=um mem.h header. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Ben Widawsky <ben.widawsky@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/162792537866.368511.8915631504621088321.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
4ad6181e |
|
17-Jun-2021 |
Ben Widawsky <ben.widawsky@intel.com> |
cxl/pci: Rename CXL REGLOC ID The current naming is confusing and wrong. The Register Locator is identified by the DVSEC identifier, not an offset. Cc: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Ben Widawsky <ben.widawsky@intel.com> Link: https://lore.kernel.org/r/20210618003009.956929-1-ben.widawsky@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
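The fix is a define-level rename; a sketch (the value 8 is the Register Locator DVSEC ID per CXL 2.0, though the exact macro name here is an assumption):

    /* before: a ..._REGLOC_OFFSET name implied an offset; it is a DVSEC ID */
    #define PCI_DVSEC_ID_CXL_REGLOC_DVSEC_ID 0x8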
|
#
21083f51 |
|
15-Jun-2021 |
Dan Williams <dan.j.williams@intel.com> |
cxl/pmem: Register 'pmem' / cxl_nvdimm devices While a memX device on /sys/bus/cxl represents a CXL memory expander control interface, a pmemX device represents the persistent memory sub-functionality. It bridges the CXL subsystem to the libnvdimm nmemX control interface. With this skeleton ndctl can now see persistent memory devices on a "CXL" bus. Later patches add support for translating libnvdimm native commands to CXL commands. # ndctl list -BDiu -b CXL { "provider":"CXL", "dev":"ndbus1", "dimms":[ { "dev":"nmem1", "state":"disabled" }, { "dev":"nmem0", "state":"disabled" } ] } Given nvdimm_bus_unregister() removes all devices on an ndbus0 the cxl_pmem infrastructure needs to arrange ->remove() to be triggered on cxl_nvdimm devices to keep their enabled state synchronized with the registration state of their corresponding device on the nvdimm_bus. In other words, always arrange for cxl_nvdimm_driver.remove() to unregister nvdimms from an nvdimm_bus ahead of the bus being unregistered. Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/162380012696.3039556.4293801691038740850.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
87815ee9 |
|
13-Apr-2021 |
Ben Widawsky <ben.widawsky@intel.com> |
cxl/pci: Add media provisioning required commands Some of the commands have already been defined for the support of RAW commands (to be blocked). Unlike their usage in the RAW interface, when used through the supported interface, they will be coordinated and marshalled along with other commands being issued by userspace and the driver itself. That coordination will be added later. The list of commands was determined based on the learnings from libnvdimm and this list is provided directly from Dan. Recommended-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Ben Widawsky <ben.widawsky@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20210413140907.534404-1-ben.widawsky@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
08422378 |
|
27-May-2021 |
Ben Widawsky <ben.widawsky@intel.com> |
cxl/pci: Add HDM decoder capabilities An HDM decoder is defined in the CXL 2.0 specification as a mechanism that allows devices and upstream ports to claim memory address ranges and participate in interleave sets. HDM decoder registers are within the component register block defined in CXL 2.0 8.2.3 CXL 2.0 Component Registers as part of the CXL.cache and CXL.mem subregion. The Component Register Block is found via the Register Locator DVSEC in a similar fashion to how the CXL Device Register Block is found. The primary difference is the capability id size of the Component Register Block is a single DWORD instead of 4 DWORDS. It's now possible to configure a CXL type 3 device's HDM decoder. Such programming is expected for CXL devices with persistent memory, and hot plugged CXL devices that participate in CXL.mem with volatile memory. Add probe and mapping functions for the component register blocks. Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Co-developed-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Ira Weiny <ira.weiny@intel.com> Co-developed-by: Vishal Verma <vishal.l.verma@intel.com> Signed-off-by: Vishal Verma <vishal.l.verma@intel.com> Signed-off-by: Ben Widawsky <ben.widawsky@intel.com> Link: https://lore.kernel.org/r/20210528004922.3980613-6-ira.weiny@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
9a016527 |
|
03-Jun-2021 |
Ira Weiny <ira.weiny@intel.com> |
cxl/pci: Reserve individual register block regions Some hardware implementations mix component and device registers into the same BAR and the driver stack is going to need independent mapping implementations for those 2 cases. Furthermore, it will be nice to have finer grained mappings should user space want to map some register blocks. Now that individual register blocks are mapped, those block regions should be reserved individually to fully separate the register blocks. Release the 'global' memory reservation and create individual register block region reservations through devm. NOTE: pci_release_mem_regions() is still compatible with pcim_enable_device() because it removes the automatic region release when called. So preserve the pcim_enable_device() so that the pcim interface can be called if needed. Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Ira Weiny <ira.weiny@intel.com> Link: https://lore.kernel.org/r/20210604005316.4187340-1-ira.weiny@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
30af9729 |
|
03-Jun-2021 |
Ira Weiny <ira.weiny@intel.com> |
cxl/pci: Map registers based on capabilities The information required to map registers based on capabilities is contained within the bars themselves. This means the bar must be mapped to read the information needed and then unmapped to map the individual parts of the BAR based on capabilities. Change cxl_setup_device_regs() to return a new cxl_register_map, change the name to cxl_probe_device_regs(). Allocate and place cxl_register_maps on a list while processing all of the specified register blocks. After probing all the register blocks, go back and map smaller register blocks based on their capabilities and dispose of the cxl_register_maps. NOTE: pci_iomap() is not managed automatically via pcim_enable_device() so be careful to call pci_iounmap() correctly. Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Ira Weiny <ira.weiny@intel.com> Link: https://lore.kernel.org/r/20210604005036.4187184-1-ira.weiny@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
f8a7e8c2 |
|
27-May-2021 |
Ira Weiny <ira.weiny@intel.com> |
cxl/pci: Reserve all device regions at once In order to remap individual register sets each bar region must be reserved prior to mapping. Because the details of individual register sets are contained within the BARs themselves, the bar must be mapped 2 times, once to extract this information and a second time for each register set. Rather than attempting to reserve each BAR individually and tracking whether that BAR has been reserved, open code pcim_iomap_regions() by first reserving all memory regions on the device and then mapping the bars individually as needed. NOTE: pci_request_mem_regions() does not need a corresponding pci_release_mem_regions() because the pci device is managed via pcim_enable_device(). Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Ira Weiny <ira.weiny@intel.com> Link: https://lore.kernel.org/r/20210528004922.3980613-3-ira.weiny@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
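A sketch of the up-front reservation, including the managed-release caveat from the NOTE:

    static int cxl_pci_setup(struct pci_dev *pdev)
    {
        int rc;

        rc = pcim_enable_device(pdev);
        if (rc)
            return rc;

        /*
         * Reserve every memory BAR at once. pcim_enable_device() arranges
         * automatic release, so no pci_release_mem_regions() pairing is
         * needed. Individual BARs are pci_iomap()'d later as required.
         */
        return pci_request_mem_regions(pdev, pci_name(pdev));
    }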
|
#
07d62eac |
|
27-May-2021 |
Ira Weiny <ira.weiny@intel.com> |
cxl/pci: Introduce cxl_decode_register_block() Each register block located in the DVSEC needs to be decoded from 2 words, 'register offset high' and 'register offset low'. Create a function, cxl_decode_register_block() to perform this decode and return the bar, offset, and register type of the register block. Then use the values decoded in cxl_mem_map_regblock() instead of passing the raw registers. Signed-off-by: Ira Weiny <ira.weiny@intel.com> Link: https://lore.kernel.org/r/20210528004922.3980613-2-ira.weiny@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
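A sketch of the decode, with masks matching the Register Locator layout in CXL 2.0 (BIR in bits 2:0, Register Block Identifier in bits 15:8, offset-low in bits 31:16; FIELD_GET() comes from <linux/bitfield.h>):

    #define CXL_REGLOC_BIR_MASK  GENMASK(2, 0)
    #define CXL_REGLOC_RBI_MASK  GENMASK(15, 8)
    #define CXL_REGLOC_ADDR_MASK GENMASK(31, 16)

    static void cxl_decode_register_block(u32 reg_lo, u32 reg_hi, u8 *bar,
                                          u64 *offset, u8 *reg_type)
    {
        /* 64-bit block offset spans the high dword plus low[31:16] */
        *offset = ((u64)reg_hi << 32) | (reg_lo & CXL_REGLOC_ADDR_MASK);
        *bar = FIELD_GET(CXL_REGLOC_BIR_MASK, reg_lo);
        *reg_type = FIELD_GET(CXL_REGLOC_RBI_MASK, reg_lo);
    }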
|
#
6630d31c |
|
20-May-2021 |
Ben Widawsky <ben.widawsky@intel.com> |
cxl/mem: Get rid of @cxlm.base @cxlm.base only existed to support holding the base found in the register block mapping code, and pass it along to the register setup code. Now that the register setup function has all logic around managing the registers, from DVSEC to iomapping up to populating our CXL specific information, it is easy to turn the @base values into local variables and remove them from our device driver state. Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Ben Widawsky <ben.widawsky@intel.com> Link: https://lore.kernel.org/r/20210520212953.1181695-1-ben.widawsky@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
1d5a4159 |
|
07-Apr-2021 |
Ben Widawsky <ben.widawsky@intel.com> |
cxl/mem: Move register locator logic into reg setup Start moving code around to ultimately get rid of @cxlm.base. The @cxlm.base member serves no purpose other than intermediate storage of the offset found in cxl_mem_map_regblock() later used by cxl_mem_setup_regs(). Aside from wanting to get rid of this useless member, it will help later when adding new register block identifiers. While @cxlm.base still exists, it will become trivial to remove it in a future patch. No functional change is meant to be introduced in this patch. Signed-off-by: Ben Widawsky <ben.widawsky@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20210407222625.320177-4-ben.widawsky@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
1b0a1a2a |
|
07-Apr-2021 |
Ben Widawsky <ben.widawsky@intel.com> |
cxl/mem: Split creation from mapping in probe Add a new function specifically for mapping the register blocks and offsets within. The new function can be used more generically for other register block identifiers. No functional change is meant to be introduced in this patch with the exception of a dev_err printed when the device register block isn't found. Signed-off-by: Ben Widawsky <ben.widawsky@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20210407222625.320177-3-ben.widawsky@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
5d0c6f02 |
|
07-Apr-2021 |
Ben Widawsky <ben.widawsky@intel.com> |
cxl/mem: Use dev instead of pdev->dev Trivial cleanup. Signed-off-by: Ben Widawsky <ben.widawsky@intel.com> Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20210407222625.320177-2-ben.widawsky@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
199cf8c3 |
|
20-May-2021 |
Vishal Verma <vishal.l.verma@intel.com> |
cxl/pci.c: Add a 'label_storage_size' attribute to the memdev The 'Identify Device' mailbox command returns an 'lsa_size', which is the size of the label storage area on the device. Export it as a sysfs attribute so that userspace tooling to read/write the LSA can determine the size without having to run an 'Identify Device' on their own. Cc: Ben Widawsky <ben.widawsky@intel.com> Cc: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Vishal Verma <vishal.l.verma@intel.com> Reviewed-by: Dan Williams <dan.j.williams@intel.com> Link: https://lore.kernel.org/r/20210520194745.1095517-1-vishal.l.verma@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
21e9f767 |
|
26-May-2021 |
Ben Widawsky <ben.widawsky@intel.com> |
cxl: Rename mem to pci As the driver has undergone development, it's become clear that the majority [entirety?] of the current functionality in mem.c is actually a layer encapsulating functionality exposed through PCI based interactions. This layer can be used either in isolation or as a foundation for higher level functionality. CXL capabilities exist in a parallel domain to PCIe. CXL devices are enumerable and controllable via "legacy" PCIe mechanisms; however, their CXL capabilities are a superset of PCIe. For example, a CXL device may be connected to a non-CXL capable PCIe root port, and therefore will not be able to participate in CXL.mem or CXL.cache operations, but can still be accessed through PCIe mechanisms for CXL.io operations. To properly represent the PCI nature of this driver, and in preparation for introducing a new driver for the CXL.mem / HDM decoder (Host-managed Device Memory) capabilities of a CXL memory expander, rename mem.c to pci.c so that mem.c is available for this new driver. The result of the change is that there is a clear layering distinction in the driver, and a systems administrator may load only the cxl_pci module and gain access to such operations as firmware update, offline provisioning of devices, and error collection. In addition to freeing up the file name for another purpose, there are two primary reasons this is useful: 1. Acting upon devices which don't have full CXL capabilities. This may happen for instance if the CXL device is connected in a CXL unaware part of the platform topology. 2. Userspace-first provisioning for devices without kernel driver interference. This may be useful when provisioning a new device in a specific manner that might otherwise be blocked or prevented by the real CXL mem driver. Reviewed-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Ben Widawsky <ben.widawsky@intel.com> Link: https://lore.kernel.org/r/20210526174413.802913-1-ben.widawsky@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|