Lines Matching defs:in

13  * VM within Hyper-V, there may seem to be no PCI bus at all in the VM
16 * Each root PCI bus has its own PCI domain, which is called "Segment" in
26 * underlying hypervisor to adjust the mappings in the I/O MMU so that each
29 * interrupts, and will report that the Interrupt Line register in the
37 * the PCI back-end driver in Hyper-V.
74 * Supported protocol versions in the order of probing - highest go
98 * should be generous in ensuring that we don't ever run out.
154 * which is all this driver does. This representation is the one used in
168 * Pretty much as defined in the PCI Specifications.
205 * @delivery_mode: As defined in Intel's Programmer's
207 * @vector_count: Number of contiguous entries in the
211 * in PCI 2.2, this can be between 1 and
212 * 32. For "MSI-X," as first defined in PCI
229 * @delivery_mode: As defined in Intel's Programmer's
231 * @vector_count: Number of contiguous entries in the
235 * in PCI 2.2, this can be between 1 and
236 * 32. For "MSI-X," as first defined in PCI
239 * @processor_count: number of bits enabled in array.
252 * Everything is the same as in 'hv_msi_desc2' except that the size of the
268 * @vector_count: same as in hv_msi_desc
286 * Specific message formats are defined later in the file.
524 * processed in order and deferred so that they don't run in the context
709 * into the irqdata data structure in migrate_one_irq() ->
831 * set to whatever is in the GIC configuration.
945 * @resp_packet_size: Size in bytes of the packet
1031 struct hv_mmio_read_input *in;
1040 in = *this_cpu_ptr(hyperv_pcpu_input_arg);
1041 out = *this_cpu_ptr(hyperv_pcpu_input_arg) + sizeof(*in);
1042 in->gpa = gpa;
1043 in->size = size;
1045 ret = hv_do_hypercall(HVCALL_MMIO_READ, in, out);
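
The matches at lines 1031-1045 all come from the MMIO-read hypercall path. A minimal sketch of how those fragments fit together is below, assuming the usual shape of such a helper: the per-CPU hypercall input page (hyperv_pcpu_input_arg) is reused for both the input and the output buffers, and the value read is copied out at the requested access width. The output struct name, the hv_result_success() check, and the error handling are assumptions here, not taken from the listing.

/* Sketch only: reconstructed around the listed fragments; details are assumptions. */
static int hv_pci_read_mmio_sketch(struct device *dev, phys_addr_t gpa, int size, u32 *val)
{
	struct hv_mmio_read_input *in;
	struct hv_mmio_read_output *out;	/* assumed output struct name */
	u64 ret;

	/* The per-CPU hypercall page holds both input and output buffers. */
	in = *this_cpu_ptr(hyperv_pcpu_input_arg);
	out = *this_cpu_ptr(hyperv_pcpu_input_arg) + sizeof(*in);
	in->gpa = gpa;
	in->size = size;

	ret = hv_do_hypercall(HVCALL_MMIO_READ, in, out);
	if (!hv_result_success(ret)) {
		dev_err(dev, "MMIO read hypercall failed: 0x%llx\n", ret);
		return -EIO;	/* error path is an assumption */
	}

	/* Copy out the result at the width the caller asked for. */
	switch (size) {
	case 1:
		*val = *(u8 *)(out->data);
		break;
	case 2:
		*val = *(u16 *)(out->data);
		break;
	default:
		*val = *(u32 *)(out->data);
		break;
	}
	return 0;
}
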
1065 struct hv_mmio_write_input *in;
1072 in = *this_cpu_ptr(hyperv_pcpu_input_arg);
1073 in->gpa = gpa;
1074 in->size = size;
1077 *(u8 *)(in->data) = val;
1080 *(u16 *)(in->data) = val;
1083 *(u32 *)(in->data) = val;
1087 ret = hv_do_hypercall(HVCALL_MMIO_WRITE, in, NULL);
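
The matches at lines 1065-1087 are the write-side counterpart. A hedged sketch, assuming the conventional shape: only an input buffer is needed, the value is stored into in->data at the requested width, and the hypercall is issued with NULL for the output argument. The error handling is again an assumption.

/* Sketch only: reconstructed around the listed fragments; details are assumptions. */
static int hv_pci_write_mmio_sketch(struct device *dev, phys_addr_t gpa, int size, u32 val)
{
	struct hv_mmio_write_input *in;
	u64 ret;

	/* Reuse the per-CPU hypercall input page; no output buffer is needed. */
	in = *this_cpu_ptr(hyperv_pcpu_input_arg);
	in->gpa = gpa;
	in->size = size;
	switch (size) {
	case 1:
		*(u8 *)(in->data) = val;
		break;
	case 2:
		*(u16 *)(in->data) = val;
		break;
	default:
		*(u32 *)(in->data) = val;
		break;
	}

	ret = hv_do_hypercall(HVCALL_MMIO_WRITE, in, NULL);
	if (!hv_result_success(ret)) {
		dev_err(dev, "MMIO write hypercall failed: 0x%llx\n", ret);
		return -EIO;	/* error path is an assumption */
	}
	return 0;
}
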
1095 * of pages in memory-mapped I/O space. Writing to the first page chooses
1097 * written to, the following page maps in the entire configuration space of
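
Lines 1095-1097 describe how config space access works on these buses: writing to the first MMIO page selects which function is being addressed, and the page that follows then maps that function's entire configuration space. A minimal sketch of the access pattern this implies is below; the names hbus->cfg_addr, CFG_PAGE_OFFSET, and config_lock are illustrative assumptions, not taken from the listing.

/*
 * Sketch of the two-page config access pattern described above.
 * Struct member and macro names here are assumptions for illustration.
 */
static u32 hv_cfg_read32_sketch(struct hv_pcibus_device *hbus, u32 wslot, int where)
{
	void __iomem *addr = hbus->cfg_addr + CFG_PAGE_OFFSET + where;
	unsigned long flags;
	u32 val;

	spin_lock_irqsave(&hbus->config_lock, flags);

	/* First page: select which PCI function subsequent accesses refer to. */
	writel(wslot, hbus->cfg_addr);
	/* Make sure the selection lands before touching the mapped config space. */
	mb();

	/* Following page: read from the selected function's config space. */
	val = readl(addr);

	spin_unlock_irqrestore(&hbus->config_lock, flags);
	return val;
}
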
1342 * Hyper-V SR-IOV provides a backchannel mechanism in software for
1344 * "configuration blocks" are similar in concept to PCI configuration space,
1345 * but instead of doing reads and writes in 32-bit chunks through a very slow
1348 * Nearly every SR-IOV device contains just such a communications channel in
1349 * hardware, so using this one in software is usually optional. Using the
1376 * @resp_packet_size: Size in bytes of the response packet
1407 * the back-end driver running in the Hyper-V parent partition.
1410 * @len: Size in bytes of buf.
1476 * @resp_packet_size: Size in bytes of the response packet
1489 * back-end driver running in the Hyper-V parent partition.
1492 * @len: Size in bytes of buf.
1533 * specified in write_blk->byte_count.
1619 * messages that are in use, keeping the interrupt redirection
1699 * Create MSI w/ dummy vCPU set, overwritten by subsequent retarget in
1710 * interrupted is specified later in hv_irq_unmask() and communicated to Hyper-V
1713 * interrupts based on the vCPU specified in message sent to the vPCI VSP in
1729 * With Hyper-V in Nov 2022, the HVCALL_RETARGET_INTERRUPT hypercall does *not*
1738 * by subsequent retarget in hv_irq_unmask().
1805 * @msg: Buffer that is filled in by this function
1809 * asking for a mapping for that tuple in this partition. The
1896 * value gets sent to the hypervisor in unmask(). This needs
1967 * in the tasklet.
1990 * in vmbus_reset_channel_cb().
2117 * Return: Size in bytes of the consumed MMIO space.
2153 * for a child device are a power of 2 in size and aligned in memory,
2160 "There's an I/O BAR in this list!\n");
2190 * prepopulate_bars() - Fill in BARs with defaults
2194 * for a device have values upon first scan. So fill them in.
2197 * enforced in other parts of the code, is that the beginning of
2229 * Clear the memory enable bit, in case it's already set. This occurs
2230 * in the suspend path of hibernation, where the device is suspended,
2291 * in the core PCI driver doesn't cause Hyper-V
2309 * Assign entries in sysfs pci slot directory.
2339 * Remove entries in sysfs pci slot directory.
2371 * (e.g. in a KDUMP kernel) or with NUMA disabled via
2421 * @resp_packet_size: The size in bytes of resp.
2548 * @work: Work struct embedded in struct hv_dr_work
2594 /* Throw this away if the list still has stuff in it. */
2815 * @work: Work struct embedded in internal device struct
2870 /* For the get_pcichild() in hv_pci_eject_device() */
2872 /* For the two refs got in new_pcichild_device() */
3081 * @version: Array of supported channel protocol versions in
3083 * @num_version: Number of elements in the version array.
3200 * in the kernel such that it comprehends either PCI devices
3202 * node (in this case, VMBus) or change it such that it
3211 * bridge windows. These descriptors have to exist in this form
3212 * in order to satisfy the code which will get invoked when the
3373 * devices to release resources allocated in the
3460 * used in local terms.) This is nice for Windows, and lines up
3461 * with the FDO/PDO split, which doesn't exist in Linux. Linux
3588 * Check if the PCI domain number is in use, and return another number if
3589 * it is in use.
3649 * The PCI bus "domain" is what is called "segment" in ACPI and other
3652 * not in use.
3654 * Note that, since this code only runs in a Hyper-V VM, Hyper-V
3659 * collisions) in the same VM.
3684 * ACPI companion in pcibios_root_bridge_prepare() and
3835 /* Remove all children in the list */
3840 /* For the two refs got in new_pcichild_device() */
3935 * before calling vmbus_close(), since it runs in a process context
3936 * as a callback in dpm_suspend(). When it starts to run, the channel
3937 * callback hv_pci_onchannelcallback(), which runs in a tasklet
3939 * items onto hbus->wq in hv_pci_devices_present() and
4027 /* Only use the version that was in use before hibernation. */