11820 lines
396 KiB
diff --git a/Documentation/ABI/testing/sysfs-devices-memory b/Documentation/ABI/testing/sysfs-devices-memory
index 246a45b96d22a..58dbc592bc57d 100644
--- a/Documentation/ABI/testing/sysfs-devices-memory
+++ b/Documentation/ABI/testing/sysfs-devices-memory
@@ -26,8 +26,9 @@ Date: September 2008
 Contact: Badari Pulavarty <pbadari@us.ibm.com>
 Description:
 	The file /sys/devices/system/memory/memoryX/phys_device
-	is read-only and is designed to show the name of physical
-	memory device. Implementation is currently incomplete.
+	is read-only; it is a legacy interface only ever used on s390x
+	to expose the covered storage increment.
+Users:	Legacy s390-tools lsmem/chmem

 What: /sys/devices/system/memory/memoryX/phys_index
 Date: September 2008
diff --git a/Documentation/admin-guide/mm/memory-hotplug.rst b/Documentation/admin-guide/mm/memory-hotplug.rst
index 5c4432c96c4b6..245739f55ac7d 100644
--- a/Documentation/admin-guide/mm/memory-hotplug.rst
+++ b/Documentation/admin-guide/mm/memory-hotplug.rst
@@ -160,8 +160,8 @@ Under each memory block, you can see 5 files:

 "online_movable", "online", "offline" command
 which will be performed on all sections in the block.
-``phys_device`` read-only: designed to show the name of physical memory
- device. This is not well implemented now.
+``phys_device`` read-only: legacy interface only ever used on s390x to
+ expose the covered storage increment.
 ``removable`` read-only: contains an integer value indicating
 whether the memory block is removable or not
 removable. A value of 1 indicates that the memory
diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
index 009d8e6c7e3c3..1cb7b9f9356e7 100644
--- a/Documentation/gpu/todo.rst
+++ b/Documentation/gpu/todo.rst
@@ -594,6 +594,27 @@ Some of these date from the very introduction of KMS in 2008 ...

 Level: Intermediate

+Remove automatic page mapping from dma-buf importing
+----------------------------------------------------
+
+When importing dma-bufs, the dma-buf and PRIME frameworks automatically map
+imported pages into the importer's DMA area. drm_gem_prime_fd_to_handle() and
+drm_gem_prime_handle_to_fd() require that importers call dma_buf_attach()
+even if they never do actual device DMA, but only CPU access through
+dma_buf_vmap(). This is a problem for USB devices, which do not support DMA
+operations.
+
+To fix the issue, automatic page mappings should be removed from the
+buffer-sharing code. Fixing this is a bit more involved, since the import/export
+cache is also tied to &drm_gem_object.import_attach. Meanwhile we paper over
+this problem for USB devices by fishing out the USB host controller device, as
+long as that supports DMA. Otherwise importing can still needlessly fail.
+
+Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
+
+Level: Advanced
+
+
 Better Testing
 ==============

diff --git a/Documentation/networking/netdev-FAQ.rst b/Documentation/networking/netdev-FAQ.rst
index ae2ae37cd9216..a1a3fc7b2a4ee 100644
--- a/Documentation/networking/netdev-FAQ.rst
+++ b/Documentation/networking/netdev-FAQ.rst
@@ -142,73 +142,13 @@ Please send incremental versions on top of what has been merged in order to fix
 the patches the way they would look like if your latest patch series was to be
 merged.

-How can I tell what patches are queued up for backporting to the various stable releases?
------------------------------------------------------------------------------------------
-Normally Greg Kroah-Hartman collects stable commits himself, but for
-networking, Dave collects up patches he deems critical for the
-networking subsystem, and then hands them off to Greg.
-
-There is a patchworks queue that you can see here:
-
-  https://patchwork.kernel.org/bundle/netdev/stable/?state=*
-
-It contains the patches which Dave has selected, but not yet handed off
-to Greg. If Greg already has the patch, then it will be here:
-
-  https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git
-
-A quick way to find whether the patch is in this stable-queue is to
-simply clone the repo, and then git grep the mainline commit ID, e.g.
-::
-
-  stable-queue$ git grep -l 284041ef21fdf2e
-  releases/3.0.84/ipv6-fix-possible-crashes-in-ip6_cork_release.patch
-  releases/3.4.51/ipv6-fix-possible-crashes-in-ip6_cork_release.patch
-  releases/3.9.8/ipv6-fix-possible-crashes-in-ip6_cork_release.patch
-  stable/stable-queue$
-
-I see a network patch and I think it should be backported to stable. Should I request it via stable@vger.kernel.org like the references in the kernel's Documentation/process/stable-kernel-rules.rst file say?
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-No, not for networking. Check the stable queues as per above first
-to see if it is already queued. If not, then send a mail to netdev,
-listing the upstream commit ID and why you think it should be a stable
-candidate.
-
-Before you jump to go do the above, do note that the normal stable rules
-in :ref:`Documentation/process/stable-kernel-rules.rst <stable_kernel_rules>`
-still apply. So you need to explicitly indicate why it is a critical
-fix and exactly what users are impacted. In addition, you need to
-convince yourself that you *really* think it has been overlooked,
-vs. having been considered and rejected.
-
-Generally speaking, the longer it has had a chance to "soak" in
-mainline, the better the odds that it is an OK candidate for stable. So
-scrambling to request a commit be added the day after it appears should
-be avoided.
-
-I have created a network patch and I think it should be backported to stable. Should I add a Cc: stable@vger.kernel.org like the references in the kernel's Documentation/ directory say?
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-No. See above answer. In short, if you think it really belongs in
-stable, then ensure you write a decent commit log that describes who
-gets impacted by the bug fix and how it manifests itself, and when the
-bug was introduced. If you do that properly, then the commit will get
-handled appropriately and most likely get put in the patchworks stable
-queue if it really warrants it.
-
-If you think there is some valid information relating to it being in
-stable that does *not* belong in the commit log, then use the three dash
-marker line as described in
-:ref:`Documentation/process/submitting-patches.rst <the_canonical_patch_format>`
-to temporarily embed that information into the patch that you send.
-
-Are all networking bug fixes backported to all stable releases?
+Are there special rules regarding stable submissions on netdev?
 ---------------------------------------------------------------
-Due to capacity, Dave could only take care of the backports for the
-last two stable releases. For earlier stable releases, each stable
-branch maintainer is supposed to take care of them. If you find any
-patch is missing from an earlier stable branch, please notify
-stable@vger.kernel.org with either a commit ID or a formal patch
-backported, and CC Dave and other relevant networking developers.
+While it used to be the case that netdev submissions were not supposed
+to carry explicit ``CC: stable@vger.kernel.org`` tags that is no longer
+the case today. Please follow the standard stable rules in
+:ref:`Documentation/process/stable-kernel-rules.rst <stable_kernel_rules>`,
+and make sure you include appropriate Fixes tags!

 Is the comment style convention different for the networking content?
 ---------------------------------------------------------------------
diff --git a/Documentation/process/stable-kernel-rules.rst b/Documentation/process/stable-kernel-rules.rst
index 3973556250e17..003c865e9c212 100644
--- a/Documentation/process/stable-kernel-rules.rst
+++ b/Documentation/process/stable-kernel-rules.rst
@@ -35,12 +35,6 @@ Rules on what kind of patches are accepted, and which ones are not, into the
 Procedure for submitting patches to the -stable tree
 ----------------------------------------------------

- - If the patch covers files in net/ or drivers/net please follow netdev stable
-   submission guidelines as described in
-   :ref:`Documentation/networking/netdev-FAQ.rst <netdev-FAQ>`
-   after first checking the stable networking queue at
-   https://patchwork.kernel.org/bundle/netdev/stable/?state=*
-   to ensure the requested patch is not already queued up.
 - Security patches should not be handled (solely) by the -stable review
   process but should follow the procedures in
   :ref:`Documentation/admin-guide/security-bugs.rst <securitybugs>`.
diff --git a/Documentation/process/submitting-patches.rst b/Documentation/process/submitting-patches.rst
index 5ba54120bef7e..5a1b1ea3aed05 100644
--- a/Documentation/process/submitting-patches.rst
+++ b/Documentation/process/submitting-patches.rst
@@ -250,11 +250,6 @@ should also read
 :ref:`Documentation/process/stable-kernel-rules.rst <stable_kernel_rules>`
 in addition to this file.

-Note, however, that some subsystem maintainers want to come to their own
-conclusions on which patches should go to the stable trees. The networking
-maintainer, in particular, would rather not see individual developers
-adding lines like the above to their patches.
-
 If changes affect userland-kernel interfaces, please send the MAN-PAGES
 maintainer (as listed in the MAINTAINERS file) a man-pages patch, or at
 least a notification of the change, so that some information makes its way
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 99ceb978c8b08..5570887a2dce2 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -182,6 +182,9 @@ is dependent on the CPU capability and the kernel configuration. The limit can
 be retrieved using KVM_CAP_ARM_VM_IPA_SIZE of the KVM_CHECK_EXTENSION
 ioctl() at run-time.

+Creation of the VM will fail if the requested IPA size (whether it is
+implicit or explicit) is unsupported on the host.
+
 Please note that configuring the IPA size does not affect the capability
 exposed by the guest CPUs in ID_AA64MMFR0_EL1[PARange]. It only affects
 size of the address translated by the stage2 level (guest physical to
diff --git a/Makefile b/Makefile
index 472136a7881e6..6ba32b82c4802 100644
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 11
-SUBLEVEL = 6
+SUBLEVEL = 7
 EXTRAVERSION =
 NAME = 💕 Valentine's Day Edition 💕

@@ -1246,9 +1246,15 @@ define filechk_utsrelease.h
 endef

 define filechk_version.h
-	echo \#define LINUX_VERSION_CODE $(shell \
-	expr $(VERSION) \* 65536 + 0$(PATCHLEVEL) \* 256 + 0$(SUBLEVEL)); \
-	echo '#define KERNEL_VERSION(a,b,c) (((a) << 16) + ((b) << 8) + (c))'
+	if [ $(SUBLEVEL) -gt 255 ]; then \
+		echo \#define LINUX_VERSION_CODE $(shell \
+		expr $(VERSION) \* 65536 + 0$(PATCHLEVEL) \* 256 + 255); \
+	else \
+		echo \#define LINUX_VERSION_CODE $(shell \
+		expr $(VERSION) \* 65536 + 0$(PATCHLEVEL) \* 256 + $(SUBLEVEL)); \
+	fi; \
+	echo '#define KERNEL_VERSION(a,b,c) (((a) << 16) + ((b) << 8) + \
+	((c) > 255 ? 255 : (c)))'
 endef

 $(version_h): FORCE
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 8a33d83ea843a..3398477c891d5 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -47,7 +47,7 @@
 #define __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context 2
 #define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa 3
 #define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid 4
-#define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_local_vmid 5
+#define __KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context 5
 #define __KVM_HOST_SMCCC_FUNC___kvm_timer_set_cntvoff 6
 #define __KVM_HOST_SMCCC_FUNC___kvm_enable_ssbs 7
 #define __KVM_HOST_SMCCC_FUNC___vgic_v3_get_ich_vtr_el2 8
@@ -183,10 +183,10 @@ DECLARE_KVM_HYP_SYM(__bp_harden_hyp_vecs);
 #define __bp_harden_hyp_vecs CHOOSE_HYP_SYM(__bp_harden_hyp_vecs)

 extern void __kvm_flush_vm_context(void);
+extern void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa,
				     int level);
 extern void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu);
-extern void __kvm_tlb_flush_local_vmid(struct kvm_s2_mmu *mmu);

 extern void __kvm_timer_set_cntvoff(u64 cntvoff);

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index c0450828378b5..32ae676236b6b 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -83,6 +83,11 @@ void sysreg_restore_guest_state_vhe(struct kvm_cpu_context *ctxt);
 void __debug_switch_to_guest(struct kvm_vcpu *vcpu);
 void __debug_switch_to_host(struct kvm_vcpu *vcpu);

+#ifdef __KVM_NVHE_HYPERVISOR__
+void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu);
+void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu);
+#endif
+
 void __fpsimd_save_state(struct user_fpsimd_state *fp_regs);
 void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);

@@ -97,7 +102,8 @@ bool kvm_host_psci_handler(struct kvm_cpu_context *host_ctxt);

 void __noreturn hyp_panic(void);
 #ifdef __KVM_NVHE_HYPERVISOR__
-void __noreturn __hyp_do_panic(bool restore_host, u64 spsr, u64 elr, u64 par);
+void __noreturn __hyp_do_panic(struct kvm_cpu_context *host_ctxt, u64 spsr,
+			       u64 elr, u64 par);
 #endif

 #endif /* __ARM64_KVM_HYP_H__ */
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index ff4732785c32f..63b6ef2cfb52c 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -315,6 +315,11 @@ static inline void *phys_to_virt(phys_addr_t x)
 #define ARCH_PFN_OFFSET		((unsigned long)PHYS_PFN_OFFSET)

 #if !defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_DEBUG_VIRTUAL)
+#define page_to_virt(x)	({						\
+	__typeof__(x) __page = x;					\
+	void *__addr = __va(page_to_phys(__page));			\
+	(void *)__tag_set((const void *)__addr, page_kasan_tag(__page));\
+})
 #define virt_to_page(x)	pfn_to_page(virt_to_pfn(x))
 #else
 #define page_to_virt(x)	({						\
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 0b3079fd28ebe..1c364ec0ad318 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -65,10 +65,7 @@ extern u64 idmap_ptrs_per_pgd;

 static inline bool __cpu_uses_extended_idmap(void)
 {
-	if (IS_ENABLED(CONFIG_ARM64_VA_BITS_52))
-		return false;
-
-	return unlikely(idmap_t0sz != TCR_T0SZ(VA_BITS));
+	return unlikely(idmap_t0sz != TCR_T0SZ(vabits_actual));
 }

 /*
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index 046be789fbb47..9a65fb5281100 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -66,7 +66,6 @@ extern bool arm64_use_ng_mappings;
 #define _PAGE_DEFAULT		(_PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))

 #define PAGE_KERNEL		__pgprot(PROT_NORMAL)
-#define PAGE_KERNEL_TAGGED	__pgprot(PROT_NORMAL_TAGGED)
 #define PAGE_KERNEL_RO		__pgprot((PROT_NORMAL & ~PTE_WRITE) | PTE_RDONLY)
 #define PAGE_KERNEL_ROX	__pgprot((PROT_NORMAL & ~(PTE_WRITE | PTE_PXN)) | PTE_RDONLY)
 #define PAGE_KERNEL_EXEC	__pgprot(PROT_NORMAL & ~PTE_PXN)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 501562793ce26..a5215d16a0f48 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -486,6 +486,9 @@ static inline pmd_t pmd_mkdevmap(pmd_t pmd)
	__pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_NORMAL_NC) | PTE_PXN | PTE_UXN)
 #define pgprot_device(prot) \
	__pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_DEVICE_nGnRE) | PTE_PXN | PTE_UXN)
+#define pgprot_tagged(prot) \
+	__pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_NORMAL_TAGGED))
+#define pgprot_mhp	pgprot_tagged
 /*
  * DMA allocations for non-coherent devices use what the Arm architecture calls
  * "Normal non-cacheable" memory, which permits speculation, unaligned accesses
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 7ec430e18f95e..a0b3bfe676096 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -319,7 +319,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
	 */
	adrp	x5, __idmap_text_end
	clz	x5, x5
-	cmp	x5, TCR_T0SZ(VA_BITS)	// default T0SZ small enough?
+	cmp	x5, TCR_T0SZ(VA_BITS_MIN) // default T0SZ small enough?
	b.ge	1f			// .. then skip VA range extension

	adr_l	x6, idmap_t0sz
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 3605f77ad4df1..11852e05ee32a 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -460,7 +460,7 @@ static inline int armv8pmu_counter_has_overflowed(u32 pmnc, int idx)
	return pmnc & BIT(ARMV8_IDX_TO_COUNTER(idx));
 }

-static inline u32 armv8pmu_read_evcntr(int idx)
+static inline u64 armv8pmu_read_evcntr(int idx)
 {
	u32 counter = ARMV8_IDX_TO_COUNTER(idx);

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index fe60d25c000e4..b25b4c19feebc 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -385,11 +385,16 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
	last_ran = this_cpu_ptr(mmu->last_vcpu_ran);

	/*
+	 * We guarantee that both TLBs and I-cache are private to each
+	 * vcpu. If detecting that a vcpu from the same VM has
+	 * previously run on the same physical CPU, call into the
+	 * hypervisor code to nuke the relevant contexts.
+	 *
	 * We might get preempted before the vCPU actually runs, but
	 * over-invalidation doesn't affect correctness.
	 */
	if (*last_ran != vcpu->vcpu_id) {
-		kvm_call_hyp(__kvm_tlb_flush_local_vmid, mmu);
+		kvm_call_hyp(__kvm_flush_cpu_context, mmu);
		*last_ran = vcpu->vcpu_id;
	}

diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index b0afad7a99c6e..0c66a1d408fd7 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -146,7 +146,7 @@ SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
	// Now restore the hyp regs
	restore_callee_saved_regs x2

-	set_loaded_vcpu xzr, x1, x2
+	set_loaded_vcpu xzr, x2, x3

 alternative_if ARM64_HAS_RAS_EXTN
	// If we have the RAS extensions we can consume a pending error
diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
index 91a711aa8382e..f401724f12ef7 100644
--- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -58,16 +58,24 @@ static void __debug_restore_spe(u64 pmscr_el1)
	write_sysreg_s(pmscr_el1, SYS_PMSCR_EL1);
 }

-void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
+void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu)
 {
	/* Disable and flush SPE data generation */
	__debug_save_spe(&vcpu->arch.host_debug_state.pmscr_el1);
+}
+
+void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
+{
	__debug_switch_to_guest_common(vcpu);
 }

-void __debug_switch_to_host(struct kvm_vcpu *vcpu)
+void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu)
 {
	__debug_restore_spe(vcpu->arch.host_debug_state.pmscr_el1);
+}
+
+void __debug_switch_to_host(struct kvm_vcpu *vcpu)
+{
	__debug_switch_to_host_common(vcpu);
 }

diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index a820dfdc9c25d..3a06085aab6f1 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -71,10 +71,15 @@ SYM_FUNC_START(__host_enter)
 SYM_FUNC_END(__host_enter)

 /*
- * void __noreturn __hyp_do_panic(bool restore_host, u64 spsr, u64 elr, u64 par);
+ * void __noreturn __hyp_do_panic(struct kvm_cpu_context *host_ctxt, u64 spsr,
+ *				  u64 elr, u64 par);
  */
 SYM_FUNC_START(__hyp_do_panic)
-	/* Load the format arguments into x1-7 */
+	mov	x29, x0
+
+	/* Load the format string into x0 and arguments into x1-7 */
+	ldr	x0, =__hyp_panic_string
+
	mov	x6, x3
	get_vcpu_ptr x7, x3

@@ -89,13 +94,8 @@ SYM_FUNC_START(__hyp_do_panic)
	ldr	lr, =panic
	msr	elr_el2, lr

-	/*
-	 * Set the panic format string and enter the host, conditionally
-	 * restoring the host context.
-	 */
-	cmp	x0, xzr
-	ldr	x0, =__hyp_panic_string
-	b.eq	__host_enter_without_restoring
+	/* Enter the host, conditionally restoring the host context. */
+	cbz	x29, __host_enter_without_restoring
	b	__host_enter_for_panic
 SYM_FUNC_END(__hyp_do_panic)

@@ -150,7 +150,7 @@ SYM_FUNC_END(__hyp_do_panic)

 .macro invalid_host_el1_vect
	.align 7
-	mov	x0, xzr		/* restore_host = false */
+	mov	x0, xzr		/* host_ctxt = NULL */
	mrs	x1, spsr_el2
	mrs	x2, elr_el2
	mrs	x3, par_el1
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index a906f9e2ff34f..1b8ef37bf8054 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -46,11 +46,11 @@ static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
	__kvm_tlb_flush_vmid(kern_hyp_va(mmu));
 }

-static void handle___kvm_tlb_flush_local_vmid(struct kvm_cpu_context *host_ctxt)
+static void handle___kvm_flush_cpu_context(struct kvm_cpu_context *host_ctxt)
 {
	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);

-	__kvm_tlb_flush_local_vmid(kern_hyp_va(mmu));
+	__kvm_flush_cpu_context(kern_hyp_va(mmu));
 }

 static void handle___kvm_timer_set_cntvoff(struct kvm_cpu_context *host_ctxt)
@@ -115,7 +115,7 @@ static const hcall_t *host_hcall[] = {
	HANDLE_FUNC(__kvm_flush_vm_context),
	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
	HANDLE_FUNC(__kvm_tlb_flush_vmid),
-	HANDLE_FUNC(__kvm_tlb_flush_local_vmid),
+	HANDLE_FUNC(__kvm_flush_cpu_context),
	HANDLE_FUNC(__kvm_timer_set_cntvoff),
	HANDLE_FUNC(__kvm_enable_ssbs),
	HANDLE_FUNC(__vgic_v3_get_ich_vtr_el2),
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index f3d0e9eca56cd..68ab6b4d51414 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -192,6 +192,14 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
	pmu_switch_needed = __pmu_switch_to_guest(host_ctxt);

	__sysreg_save_state_nvhe(host_ctxt);
+	/*
+	 * We must flush and disable the SPE buffer for nVHE, as
+	 * the translation regime(EL1&0) is going to be loaded with
+	 * that of the guest. And we must do this before we change the
+	 * translation regime to EL2 (via MDCR_EL2_E2PB == 0) and
+	 * before we load guest Stage1.
+	 */
+	__debug_save_host_buffers_nvhe(vcpu);

	__adjust_pc(vcpu);

@@ -234,11 +242,12 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED)
		__fpsimd_save_fpexc32(vcpu);

+	__debug_switch_to_host(vcpu);
	/*
	 * This must come after restoring the host sysregs, since a non-VHE
	 * system may enable SPE here and make use of the TTBRs.
	 */
-	__debug_switch_to_host(vcpu);
+	__debug_restore_host_buffers_nvhe(vcpu);

	if (pmu_switch_needed)
		__pmu_switch_to_host(host_ctxt);
@@ -257,7 +266,6 @@ void __noreturn hyp_panic(void)
	u64 spsr = read_sysreg_el2(SYS_SPSR);
	u64 elr = read_sysreg_el2(SYS_ELR);
	u64 par = read_sysreg_par();
-	bool restore_host = true;
	struct kvm_cpu_context *host_ctxt;
	struct kvm_vcpu *vcpu;

@@ -271,7 +279,7 @@ void __noreturn hyp_panic(void)
		__sysreg_restore_state_nvhe(host_ctxt);
	}

-	__hyp_do_panic(restore_host, spsr, elr, par);
+	__hyp_do_panic(host_ctxt, spsr, elr, par);
	unreachable();
 }

diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index fbde89a2c6e83..229b06748c208 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -123,7 +123,7 @@ void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
	__tlb_switch_to_host(&cxt);
 }

-void __kvm_tlb_flush_local_vmid(struct kvm_s2_mmu *mmu)
+void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu)
 {
	struct tlb_inv_context cxt;

@@ -131,6 +131,7 @@ void __kvm_tlb_flush_local_vmid(struct kvm_s2_mmu *mmu)
	__tlb_switch_to_guest(mmu, &cxt);

	__tlbi(vmalle1);
+	asm volatile("ic iallu");
	dsb(nsh);
	isb();

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index bdf8e55ed308e..4d99d07c610c8 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -225,6 +225,7 @@ static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data,
		goto out;

	if (!table) {
+		data->addr = ALIGN_DOWN(data->addr, kvm_granule_size(level));
		data->addr += kvm_granule_size(level);
		goto out;
	}
diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c
index fd7895945bbc6..66f17349f0c36 100644
--- a/arch/arm64/kvm/hyp/vhe/tlb.c
+++ b/arch/arm64/kvm/hyp/vhe/tlb.c
@@ -127,7 +127,7 @@ void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
	__tlb_switch_to_host(&cxt);
 }

-void __kvm_tlb_flush_local_vmid(struct kvm_s2_mmu *mmu)
+void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu)
 {
	struct tlb_inv_context cxt;

@@ -135,6 +135,7 @@ void __kvm_tlb_flush_local_vmid(struct kvm_s2_mmu *mmu)
	__tlb_switch_to_guest(mmu, &cxt);

	__tlbi(vmalle1);
+	asm volatile("ic iallu");
	dsb(nsh);
	isb();

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 7d2257cc54387..eebde5eb6c3d0 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1309,8 +1309,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
	 * Prevent userspace from creating a memory region outside of the IPA
	 * space addressable by the KVM guest IPA space.
	 */
-	if (memslot->base_gfn + memslot->npages >=
-	    (kvm_phys_size(kvm) >> PAGE_SHIFT))
+	if ((memslot->base_gfn + memslot->npages) > (kvm_phys_size(kvm) >> PAGE_SHIFT))
		return -EFAULT;

	mmap_read_lock(current->mm);
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 47f3f035f3eac..9d3d09a898945 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -324,10 +324,9 @@ int kvm_set_ipa_limit(void)
	}

	kvm_ipa_limit = id_aa64mmfr0_parange_to_phys_shift(parange);
-	WARN(kvm_ipa_limit < KVM_PHYS_SHIFT,
-	     "KVM IPA Size Limit (%d bits) is smaller than default size\n",
-	     kvm_ipa_limit);
-	kvm_info("IPA Size Limit: %d bits\n", kvm_ipa_limit);
+	kvm_info("IPA Size Limit: %d bits%s\n", kvm_ipa_limit,
+		 ((kvm_ipa_limit < KVM_PHYS_SHIFT) ?
+		 " (Reduced IPA size, limited VM/VMM compatibility)" : ""));

	return 0;
 }
@@ -356,6 +355,11 @@ int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type)
			return -EINVAL;
	} else {
		phys_shift = KVM_PHYS_SHIFT;
+		if (phys_shift > kvm_ipa_limit) {
+			pr_warn_once("%s using unsupported default IPA limit, upgrade your VMM\n",
+				     current->comm);
+			return -EINVAL;
+		}
	}

	mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 709d98fea90cc..1141075e4d53c 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -230,6 +230,18 @@ int pfn_valid(unsigned long pfn)

	if (!valid_section(__pfn_to_section(pfn)))
		return 0;
+
+	/*
+	 * ZONE_DEVICE memory does not have the memblock entries.
+	 * memblock_is_map_memory() check for ZONE_DEVICE based
+	 * addresses will always fail. Even the normal hotplugged
+	 * memory will never have MEMBLOCK_NOMAP flag set in their
+	 * memblock entries. Skip memblock search for all non early
+	 * memory sections covering all of hotplug memory including
+	 * both normal and ZONE_DEVICE based.
+	 */
+	if (!early_section(__pfn_to_section(pfn)))
+		return pfn_section_valid(__pfn_to_section(pfn), pfn);
 #endif
	return memblock_is_map_memory(addr);
 }
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index ae0c3d023824e..6f0648777d347 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -40,7 +40,7 @@
 #define NO_BLOCK_MAPPINGS	BIT(0)
 #define NO_CONT_MAPPINGS	BIT(1)

-u64 idmap_t0sz = TCR_T0SZ(VA_BITS);
+u64 idmap_t0sz = TCR_T0SZ(VA_BITS_MIN);
 u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;

 u64 __section(".mmuoff.data.write") vabits_actual;
@@ -512,7 +512,8 @@ static void __init map_mem(pgd_t *pgdp)
		 * if MTE is present. Otherwise, it has the same attributes as
		 * PAGE_KERNEL.
		 */
-		__map_memblock(pgdp, start, end, PAGE_KERNEL_TAGGED, flags);
+		__map_memblock(pgdp, start, end, pgprot_tagged(PAGE_KERNEL),
+			       flags);
	}

 /*
diff --git a/arch/mips/crypto/Makefile b/arch/mips/crypto/Makefile
index 8e1deaf00e0c0..5e4105cccf9fa 100644
--- a/arch/mips/crypto/Makefile
+++ b/arch/mips/crypto/Makefile
@@ -12,8 +12,8 @@ AFLAGS_chacha-core.o += -O2 # needed to fill branch delay slots
 obj-$(CONFIG_CRYPTO_POLY1305_MIPS) += poly1305-mips.o
 poly1305-mips-y := poly1305-core.o poly1305-glue.o

-perlasm-flavour-$(CONFIG_CPU_MIPS32) := o32
-perlasm-flavour-$(CONFIG_CPU_MIPS64) := 64
+perlasm-flavour-$(CONFIG_32BIT) := o32
+perlasm-flavour-$(CONFIG_64BIT) := 64

 quiet_cmd_perlasm = PERLASM $@
       cmd_perlasm = $(PERL) $(<) $(perlasm-flavour-y) $(@)
diff --git a/arch/mips/include/asm/traps.h b/arch/mips/include/asm/traps.h
index 6a0864bb604dc..9038b91e2d8c3 100644
--- a/arch/mips/include/asm/traps.h
+++ b/arch/mips/include/asm/traps.h
@@ -24,6 +24,9 @@ extern void (*board_ebase_setup)(void);
 extern void (*board_cache_error_setup)(void);

 extern int register_nmi_notifier(struct notifier_block *nb);
+extern void reserve_exception_space(phys_addr_t addr, unsigned long size);
+
+#define VECTORSPACING 0x100	/* for EI/VI mode */

 #define nmi_notifier(fn, pri)						\
 ({									\
diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
index 31cb9199197ca..21794db53c05a 100644
--- a/arch/mips/kernel/cpu-probe.c
+++ b/arch/mips/kernel/cpu-probe.c
@@ -26,6 +26,7 @@
 #include <asm/elf.h>
 #include <asm/pgtable-bits.h>
 #include <asm/spram.h>
+#include <asm/traps.h>
 #include <linux/uaccess.h>

 #include "fpu-probe.h"
@@ -1619,6 +1620,7 @@ static inline void cpu_probe_broadcom(struct cpuinfo_mips *c, unsigned int cpu)
		c->cputype = CPU_BMIPS3300;
		__cpu_name[cpu] = "Broadcom BMIPS3300";
		set_elf_platform(cpu, "bmips3300");
+		reserve_exception_space(0x400, VECTORSPACING * 64);
		break;
	case PRID_IMP_BMIPS43XX: {
		int rev = c->processor_id & PRID_REV_MASK;
@@ -1629,6 +1631,7 @@ static inline void cpu_probe_broadcom(struct cpuinfo_mips *c, unsigned int cpu)
			__cpu_name[cpu] = "Broadcom BMIPS4380";
			set_elf_platform(cpu, "bmips4380");
			c->options |= MIPS_CPU_RIXI;
+			reserve_exception_space(0x400, VECTORSPACING * 64);
		} else {
			c->cputype = CPU_BMIPS4350;
			__cpu_name[cpu] = "Broadcom BMIPS4350";
@@ -1645,6 +1648,7 @@ static inline void cpu_probe_broadcom(struct cpuinfo_mips *c, unsigned int cpu)
		__cpu_name[cpu] = "Broadcom BMIPS5000";
		set_elf_platform(cpu, "bmips5000");
		c->options |= MIPS_CPU_ULRI | MIPS_CPU_RIXI;
+		reserve_exception_space(0x1000, VECTORSPACING * 64);
		break;
	}
 }
@@ -2124,6 +2128,8 @@ void cpu_probe(void)
	if (cpu == 0)
		__ua_limit = ~((1ull << cpu_vmbits) - 1);
 #endif
+
+	reserve_exception_space(0, 0x1000);
 }

 void cpu_report(void)
diff --git a/arch/mips/kernel/cpu-r3k-probe.c b/arch/mips/kernel/cpu-r3k-probe.c
index abdbbe8c5a43a..af654771918cd 100644
--- a/arch/mips/kernel/cpu-r3k-probe.c
+++ b/arch/mips/kernel/cpu-r3k-probe.c
@@ -21,6 +21,7 @@
 #include <asm/fpu.h>
 #include <asm/mipsregs.h>
 #include <asm/elf.h>
+#include <asm/traps.h>

 #include "fpu-probe.h"

@@ -158,6 +159,8 @@ void cpu_probe(void)
 cpu_set_fpu_opts(c);
 else
 cpu_set_nofpu_opts(c);
+
+ reserve_exception_space(0, 0x400);
 }

 void cpu_report(void)
diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
index e0352958e2f72..808b8b61ded15 100644
--- a/arch/mips/kernel/traps.c
+++ b/arch/mips/kernel/traps.c
@@ -2009,13 +2009,16 @@ void __noreturn nmi_exception_handler(struct pt_regs *regs)
 nmi_exit();
 }

-#define VECTORSPACING 0x100 /* for EI/VI mode */
-
 unsigned long ebase;
 EXPORT_SYMBOL_GPL(ebase);
 unsigned long exception_handlers[32];
 unsigned long vi_handlers[64];

+void reserve_exception_space(phys_addr_t addr, unsigned long size)
+{
+ memblock_reserve(addr, size);
+}
+
 void __init *set_except_vector(int n, void *addr)
 {
 unsigned long handler = (unsigned long) addr;
@@ -2367,10 +2370,7 @@ void __init trap_init(void)

 if (!cpu_has_mips_r2_r6) {
 ebase = CAC_BASE;
- ebase_pa = virt_to_phys((void *)ebase);
 vec_size = 0x400;
-
- memblock_reserve(ebase_pa, vec_size);
 } else {
 if (cpu_has_veic || cpu_has_vint)
 vec_size = 0x200 + VECTORSPACING*64;
diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
index eacc9102c2515..d5b3c3bb95b40 100644
--- a/arch/powerpc/include/asm/code-patching.h
+++ b/arch/powerpc/include/asm/code-patching.h
@@ -73,7 +73,7 @@ void __patch_exception(int exc, unsigned long addr);
 #endif

 #define OP_RT_RA_MASK 0xffff0000UL
-#define LIS_R2 0x3c020000UL
+#define LIS_R2 0x3c400000UL
 #define ADDIS_R2_R12 0x3c4c0000UL
 #define ADDI_R2_R2 0x38420000UL

diff --git a/arch/powerpc/include/asm/machdep.h b/arch/powerpc/include/asm/machdep.h
index cf6ebbc16cb47..764f2732a8218 100644
--- a/arch/powerpc/include/asm/machdep.h
+++ b/arch/powerpc/include/asm/machdep.h
@@ -59,6 +59,9 @@ struct machdep_calls {
 int (*pcibios_root_bridge_prepare)(struct pci_host_bridge
 *bridge);

+ /* finds all the pci_controllers present at boot */
+ void (*discover_phbs)(void);
+
 /* To setup PHBs when using automatic OF platform driver for PCI */
 int (*pci_setup_phb)(struct pci_controller *host);

diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h
index 58f9dc060a7b4..42e9bc4018da4 100644
--- a/arch/powerpc/include/asm/ptrace.h
+++ b/arch/powerpc/include/asm/ptrace.h
@@ -70,6 +70,9 @@ struct pt_regs
 };
 #endif

+
+#define STACK_FRAME_WITH_PT_REGS (STACK_FRAME_OVERHEAD + sizeof(struct pt_regs))
+
 #ifdef __powerpc64__

 /*
@@ -192,7 +195,7 @@ static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
 #define TRAP_FLAGS_MASK 0x11
 #define TRAP(regs) ((regs)->trap & ~TRAP_FLAGS_MASK)
 #define FULL_REGS(regs) (((regs)->trap & 1) == 0)
-#define SET_FULL_REGS(regs) ((regs)->trap |= 1)
+#define SET_FULL_REGS(regs) ((regs)->trap &= ~1)
 #endif
 #define CHECK_FULL_REGS(regs) BUG_ON(!FULL_REGS(regs))
 #define NV_REG_POISON 0xdeadbeefdeadbeefUL
@@ -207,7 +210,7 @@ static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
 #define TRAP_FLAGS_MASK 0x1F
 #define TRAP(regs) ((regs)->trap & ~TRAP_FLAGS_MASK)
 #define FULL_REGS(regs) (((regs)->trap & 1) == 0)
-#define SET_FULL_REGS(regs) ((regs)->trap |= 1)
+#define SET_FULL_REGS(regs) ((regs)->trap &= ~1)
 #define IS_CRITICAL_EXC(regs) (((regs)->trap & 2) != 0)
 #define IS_MCHECK_EXC(regs) (((regs)->trap & 4) != 0)
 #define IS_DEBUG_EXC(regs) (((regs)->trap & 8) != 0)
diff --git a/arch/powerpc/include/asm/switch_to.h b/arch/powerpc/include/asm/switch_to.h
index fdab934283721..9d1fbd8be1c74 100644
--- a/arch/powerpc/include/asm/switch_to.h
+++ b/arch/powerpc/include/asm/switch_to.h
@@ -71,6 +71,16 @@ static inline void disable_kernel_vsx(void)
 {
 msr_check_and_clear(MSR_FP|MSR_VEC|MSR_VSX);
 }
+#else
+static inline void enable_kernel_vsx(void)
+{
+ BUILD_BUG();
+}
+
+static inline void disable_kernel_vsx(void)
+{
+ BUILD_BUG();
+}
 #endif

 #ifdef CONFIG_SPE
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index b12d7c049bfe2..989006b5ad0ff 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -309,7 +309,7 @@ int main(void)

 /* Interrupt register frame */
 DEFINE(INT_FRAME_SIZE, STACK_INT_FRAME_SIZE);
- DEFINE(SWITCH_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs));
+ DEFINE(SWITCH_FRAME_SIZE, STACK_FRAME_WITH_PT_REGS);
 STACK_PT_REGS_OFFSET(GPR0, gpr[0]);
 STACK_PT_REGS_OFFSET(GPR1, gpr[1]);
 STACK_PT_REGS_OFFSET(GPR2, gpr[2]);
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 6e53f76387374..de988770a7e4e 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -470,7 +470,7 @@ DEFINE_FIXED_SYMBOL(\name\()_common_real)

 ld r10,PACAKMSR(r13) /* get MSR value for kernel */
 /* MSR[RI] is clear iff using SRR regs */
- .if IHSRR == EXC_HV_OR_STD
+ .if IHSRR_IF_HVMODE
 BEGIN_FTR_SECTION
 xori r10,r10,MSR_RI
 END_FTR_SECTION_IFCLR(CPU_FTR_HVMODE)
diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
index bc57e3a82d689..b1a1a928fcb8b 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -447,11 +447,12 @@ InstructionTLBMiss:
 cmplw 0,r1,r3
 #endif
 mfspr r2, SPRN_SDR1
- li r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC
+ li r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC | _PAGE_USER
 rlwinm r2, r2, 28, 0xfffff000
 #ifdef CONFIG_MODULES
 bgt- 112f
 lis r2, (swapper_pg_dir - PAGE_OFFSET)@ha /* if kernel address, use */
+ li r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC
 addi r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l /* kernel page table */
 #endif
 112: rlwimi r2,r3,12,20,29 /* insert top 10 bits of address */
@@ -510,10 +511,11 @@ DataLoadTLBMiss:
 lis r1, TASK_SIZE@h /* check if kernel address */
 cmplw 0,r1,r3
 mfspr r2, SPRN_SDR1
- li r1, _PAGE_PRESENT | _PAGE_ACCESSED
+ li r1, _PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_USER
 rlwinm r2, r2, 28, 0xfffff000
 bgt- 112f
 lis r2, (swapper_pg_dir - PAGE_OFFSET)@ha /* if kernel address, use */
+ li r1, _PAGE_PRESENT | _PAGE_ACCESSED
 addi r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l /* kernel page table */
 112: rlwimi r2,r3,12,20,29 /* insert top 10 bits of address */
 lwz r2,0(r2) /* get pmd entry */
@@ -587,10 +589,11 @@ DataStoreTLBMiss:
 lis r1, TASK_SIZE@h /* check if kernel address */
 cmplw 0,r1,r3
 mfspr r2, SPRN_SDR1
- li r1, _PAGE_RW | _PAGE_DIRTY | _PAGE_PRESENT | _PAGE_ACCESSED
+ li r1, _PAGE_RW | _PAGE_DIRTY | _PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_USER
 rlwinm r2, r2, 28, 0xfffff000
 bgt- 112f
 lis r2, (swapper_pg_dir - PAGE_OFFSET)@ha /* if kernel address, use */
+ li r1, _PAGE_RW | _PAGE_DIRTY | _PAGE_PRESENT | _PAGE_ACCESSED
 addi r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l /* kernel page table */
 112: rlwimi r2,r3,12,20,29 /* insert top 10 bits of address */
 lwz r2,0(r2) /* get pmd entry */
diff --git a/arch/powerpc/kernel/pci-common.c b/arch/powerpc/kernel/pci-common.c
index 2b555997b2950..001e90cd8948b 100644
--- a/arch/powerpc/kernel/pci-common.c
+++ b/arch/powerpc/kernel/pci-common.c
@@ -1699,3 +1699,13 @@ static void fixup_hide_host_resource_fsl(struct pci_dev *dev)
 }
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MOTOROLA, PCI_ANY_ID, fixup_hide_host_resource_fsl);
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_FREESCALE, PCI_ANY_ID, fixup_hide_host_resource_fsl);
+
+
+static int __init discover_phbs(void)
+{
+ if (ppc_md.discover_phbs)
+ ppc_md.discover_phbs();
+
+ return 0;
+}
+core_initcall(discover_phbs);
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index a66f435dabbfe..b65a73e4d6423 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -2176,7 +2176,7 @@ void show_stack(struct task_struct *tsk, unsigned long *stack,
 * See if this is an exception frame.
 * We look for the "regshere" marker in the current frame.
 */
- if (validate_sp(sp, tsk, STACK_INT_FRAME_SIZE)
+ if (validate_sp(sp, tsk, STACK_FRAME_WITH_PT_REGS)
 && stack[STACK_FRAME_MARKER] == STACK_FRAME_REGS_MARKER) {
 struct pt_regs *regs = (struct pt_regs *)
 (sp + STACK_FRAME_OVERHEAD);
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index 3ec7b443fe6bb..4be05517f2db8 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -503,8 +503,11 @@ out:
 die("Unrecoverable nested System Reset", regs, SIGABRT);
 #endif
 /* Must die if the interrupt is not recoverable */
- if (!(regs->msr & MSR_RI))
+ if (!(regs->msr & MSR_RI)) {
+ /* For the reason explained in die_mce, nmi_exit before die */
+ nmi_exit();
 die("Unrecoverable System Reset", regs, SIGABRT);
+ }

 if (saved_hsrrs) {
 mtspr(SPRN_HSRR0, hsrr0);
diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
index bb5c20d4ca91c..c6aebc149d141 100644
--- a/arch/powerpc/lib/sstep.c
+++ b/arch/powerpc/lib/sstep.c
@@ -904,7 +904,7 @@ static nokprobe_inline int do_vsx_load(struct instruction_op *op,
 if (!address_ok(regs, ea, size) || copy_mem_in(mem, ea, size, regs))
 return -EFAULT;

- nr_vsx_regs = size / sizeof(__vector128);
+ nr_vsx_regs = max(1ul, size / sizeof(__vector128));
 emulate_vsx_load(op, buf, mem, cross_endian);
 preempt_disable();
 if (reg < 32) {
@@ -951,7 +951,7 @@ static nokprobe_inline int do_vsx_store(struct instruction_op *op,
 if (!address_ok(regs, ea, size))
 return -EFAULT;

- nr_vsx_regs = size / sizeof(__vector128);
+ nr_vsx_regs = max(1ul, size / sizeof(__vector128));
 preempt_disable();
 if (reg < 32) {
 /* FP regs + extensions */
diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
index 28206b1fe172a..51f413521fdef 100644
--- a/arch/powerpc/perf/core-book3s.c
+++ b/arch/powerpc/perf/core-book3s.c
@@ -212,7 +212,7 @@ static inline void perf_get_data_addr(struct perf_event *event, struct pt_regs *
 if (!(mmcra & MMCRA_SAMPLE_ENABLE) || sdar_valid)
 *addrp = mfspr(SPRN_SDAR);

- if (is_kernel_addr(mfspr(SPRN_SDAR)) && perf_allow_kernel(&event->attr) != 0)
+ if (is_kernel_addr(mfspr(SPRN_SDAR)) && event->attr.exclude_kernel)
 *addrp = 0;
 }

@@ -506,7 +506,7 @@ static void power_pmu_bhrb_read(struct perf_event *event, struct cpu_hw_events *
 * addresses, hence include a check before filtering code
 */
 if (!(ppmu->flags & PPMU_ARCH_31) &&
- is_kernel_addr(addr) && perf_allow_kernel(&event->attr) != 0)
+ is_kernel_addr(addr) && event->attr.exclude_kernel)
 continue;

 /* Branches are read most recent first (ie. mfbhrb 0 is
@@ -2149,7 +2149,17 @@ static void record_and_restart(struct perf_event *event, unsigned long val,
 left += period;
 if (left <= 0)
 left = period;
- record = siar_valid(regs);
+
+ /*
+ * If address is not requested in the sample via
+ * PERF_SAMPLE_IP, just record that sample irrespective
+ * of SIAR valid check.
+ */
+ if (event->attr.sample_type & PERF_SAMPLE_IP)
+ record = siar_valid(regs);
+ else
+ record = 1;
+
 event->hw.last_period = event->hw.sample_period;
 }
 if (left < 0x80000000LL)
@@ -2167,9 +2177,10 @@ static void record_and_restart(struct perf_event *event, unsigned long val,
 * MMCR2. Check attr.exclude_kernel and address to drop the sample in
 * these cases.
 */
- if (event->attr.exclude_kernel && record)
- if (is_kernel_addr(mfspr(SPRN_SIAR)))
- record = 0;
+ if (event->attr.exclude_kernel &&
+ (event->attr.sample_type & PERF_SAMPLE_IP) &&
+ is_kernel_addr(mfspr(SPRN_SIAR)))
+ record = 0;

 /*
 * Finally record data if requested.
diff --git a/arch/powerpc/platforms/pseries/msi.c b/arch/powerpc/platforms/pseries/msi.c
index b3ac2455faadc..637300330507f 100644
--- a/arch/powerpc/platforms/pseries/msi.c
+++ b/arch/powerpc/platforms/pseries/msi.c
@@ -4,6 +4,7 @@
 * Copyright 2006-2007 Michael Ellerman, IBM Corp.
 */

+#include <linux/crash_dump.h>
 #include <linux/device.h>
 #include <linux/irq.h>
 #include <linux/msi.h>
@@ -458,8 +459,28 @@ again:
 return hwirq;
 }

- virq = irq_create_mapping_affinity(NULL, hwirq,
- entry->affinity);
+ /*
+ * Depending on the number of online CPUs in the original
+ * kernel, it is likely for CPU #0 to be offline in a kdump
+ * kernel. The associated IRQs in the affinity mappings
+ * provided by irq_create_affinity_masks() are thus not
+ * started by irq_startup(), as per-design for managed IRQs.
+ * This can be a problem with multi-queue block devices driven
+ * by blk-mq : such a non-started IRQ is very likely paired
+ * with the single queue enforced by blk-mq during kdump (see
+ * blk_mq_alloc_tag_set()). This causes the device to remain
+ * silent and likely hangs the guest at some point.
+ *
+ * We don't really care for fine-grained affinity when doing
+ * kdump actually : simply ignore the pre-computed affinity
+ * masks in this case and let the default mask with all CPUs
+ * be used when creating the IRQ mappings.
+ */
+ if (is_kdump_kernel())
+ virq = irq_create_mapping(NULL, hwirq);
+ else
+ virq = irq_create_mapping_affinity(NULL, hwirq,
+ entry->affinity);

 if (!virq) {
 pr_debug("rtas_msi: Failed mapping hwirq %d\n", hwirq);
diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
index 27c7630141148..1bae4a65416b2 100644
--- a/arch/s390/kernel/smp.c
+++ b/arch/s390/kernel/smp.c
@@ -770,7 +770,7 @@ static int smp_add_core(struct sclp_core_entry *core, cpumask_t *avail,
 static int __smp_rescan_cpus(struct sclp_core_info *info, bool early)
 {
 struct sclp_core_entry *core;
- cpumask_t avail;
+ static cpumask_t avail;
 bool configured;
 u16 core_id;
 int nr, i;
diff --git a/arch/sparc/include/asm/mman.h b/arch/sparc/include/asm/mman.h
index f94532f25db14..274217e7ed702 100644
--- a/arch/sparc/include/asm/mman.h
+++ b/arch/sparc/include/asm/mman.h
@@ -57,35 +57,39 @@ static inline int sparc_validate_prot(unsigned long prot, unsigned long addr)
 {
 if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_ADI))
 return 0;
- if (prot & PROT_ADI) {
- if (!adi_capable())
- return 0;
+ return 1;
+}

- if (addr) {
- struct vm_area_struct *vma;
+#define arch_validate_flags(vm_flags) arch_validate_flags(vm_flags)
+/* arch_validate_flags() - Ensure combination of flags is valid for a
+ * VMA.
+ */
+static inline bool arch_validate_flags(unsigned long vm_flags)
+{
+ /* If ADI is being enabled on this VMA, check for ADI
+ * capability on the platform and ensure VMA is suitable
+ * for ADI
+ */
+ if (vm_flags & VM_SPARC_ADI) {
+ if (!adi_capable())
+ return false;

- vma = find_vma(current->mm, addr);
- if (vma) {
- /* ADI can not be enabled on PFN
- * mapped pages
- */
- if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
- return 0;
+ /* ADI can not be enabled on PFN mapped pages */
+ if (vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
+ return false;

- /* Mergeable pages can become unmergeable
- * if ADI is enabled on them even if they
- * have identical data on them. This can be
- * because ADI enabled pages with identical
- * data may still not have identical ADI
- * tags on them. Disallow ADI on mergeable
- * pages.
- */
- if (vma->vm_flags & VM_MERGEABLE)
- return 0;
- }
- }
+ /* Mergeable pages can become unmergeable
+ * if ADI is enabled on them even if they
+ * have identical data on them. This can be
+ * because ADI enabled pages with identical
+ * data may still not have identical ADI
+ * tags on them. Disallow ADI on mergeable
+ * pages.
+ */
+ if (vm_flags & VM_MERGEABLE)
+ return false;
 }
- return 1;
+ return true;
 }
 #endif /* CONFIG_SPARC64 */

diff --git a/arch/sparc/mm/init_32.c b/arch/sparc/mm/init_32.c
index eb2946b1df8a4..6139c5700ccc9 100644
--- a/arch/sparc/mm/init_32.c
+++ b/arch/sparc/mm/init_32.c
@@ -197,6 +197,9 @@ unsigned long __init bootmem_init(unsigned long *pages_avail)
 size = memblock_phys_mem_size() - memblock_reserved_size();
 *pages_avail = (size >> PAGE_SHIFT) - high_pages;

+ /* Only allow low memory to be allocated via memblock allocation */
+ memblock_set_current_limit(max_low_pfn << PAGE_SHIFT);
+
 return max_pfn;
 }

diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index f89ae8ada64fe..2e4d91f3feea4 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -128,7 +128,8 @@ static noinstr bool __do_fast_syscall_32(struct pt_regs *regs)
 regs->ax = -EFAULT;

 instrumentation_end();
- syscall_exit_to_user_mode(regs);
+ local_irq_disable();
+ irqentry_exit_to_user_mode(regs);
 return false;
 }

diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index 541fdaf640453..0051cf5c792d1 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -210,6 +210,8 @@ SYM_CODE_START(entry_SYSCALL_compat)
 /* Switch to the kernel stack */
 movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp

+SYM_INNER_LABEL(entry_SYSCALL_compat_safe_stack, SYM_L_GLOBAL)
+
 /* Construct struct pt_regs on stack */
 pushq $__USER32_DS /* pt_regs->ss */
 pushq %r8 /* pt_regs->sp */
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 4faaef3a8f6c4..d3f5cf70c1a09 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3578,8 +3578,10 @@ static int intel_pmu_hw_config(struct perf_event *event)
 if (!(event->attr.freq || (event->attr.wakeup_events && !event->attr.watermark))) {
 event->hw.flags |= PERF_X86_EVENT_AUTO_RELOAD;
 if (!(event->attr.sample_type &
- ~intel_pmu_large_pebs_flags(event)))
+ ~intel_pmu_large_pebs_flags(event))) {
 event->hw.flags |= PERF_X86_EVENT_LARGE_PEBS;
+ event->attach_state |= PERF_ATTACH_SCHED_CB;
+ }
 }
 if (x86_pmu.pebs_aliases)
 x86_pmu.pebs_aliases(event);
@@ -3592,6 +3594,7 @@ static int intel_pmu_hw_config(struct perf_event *event)
 ret = intel_pmu_setup_lbr_filter(event);
 if (ret)
 return ret;
+ event->attach_state |= PERF_ATTACH_SCHED_CB;

 /*
 * BTS is set up earlier in this path, so don't account twice
diff --git a/arch/x86/include/asm/insn-eval.h b/arch/x86/include/asm/insn-eval.h
index a0f839aa144d9..98b4dae5e8bc8 100644
--- a/arch/x86/include/asm/insn-eval.h
+++ b/arch/x86/include/asm/insn-eval.h
@@ -23,6 +23,8 @@ unsigned long insn_get_seg_base(struct pt_regs *regs, int seg_reg_idx);
 int insn_get_code_seg_params(struct pt_regs *regs);
 int insn_fetch_from_user(struct pt_regs *regs,
 unsigned char buf[MAX_INSN_SIZE]);
+int insn_fetch_from_user_inatomic(struct pt_regs *regs,
+ unsigned char buf[MAX_INSN_SIZE]);
 bool insn_decode(struct insn *insn, struct pt_regs *regs,
 unsigned char buf[MAX_INSN_SIZE], int buf_size);

diff --git a/arch/x86/include/asm/proto.h b/arch/x86/include/asm/proto.h
index 2c35f1c01a2df..b6a9d51d1d791 100644
--- a/arch/x86/include/asm/proto.h
+++ b/arch/x86/include/asm/proto.h
@@ -25,6 +25,7 @@ void __end_SYSENTER_singlestep_region(void);
 void entry_SYSENTER_compat(void);
 void __end_entry_SYSENTER_compat(void);
 void entry_SYSCALL_compat(void);
+void entry_SYSCALL_compat_safe_stack(void);
 void entry_INT80_compat(void);
 #ifdef CONFIG_XEN_PV
 void xen_entry_INT80_compat(void);
diff --git a/arch/x86/include/asm/ptrace.h b/arch/x86/include/asm/ptrace.h
index d8324a2366961..409f661481e11 100644
--- a/arch/x86/include/asm/ptrace.h
+++ b/arch/x86/include/asm/ptrace.h
@@ -94,6 +94,8 @@ struct pt_regs {
 #include <asm/paravirt_types.h>
 #endif

+#include <asm/proto.h>
+
 struct cpuinfo_x86;
 struct task_struct;

@@ -175,6 +177,19 @@ static inline bool any_64bit_mode(struct pt_regs *regs)
 #ifdef CONFIG_X86_64
 #define current_user_stack_pointer() current_pt_regs()->sp
 #define compat_user_stack_pointer() current_pt_regs()->sp
+
+static inline bool ip_within_syscall_gap(struct pt_regs *regs)
+{
+ bool ret = (regs->ip >= (unsigned long)entry_SYSCALL_64 &&
+ regs->ip < (unsigned long)entry_SYSCALL_64_safe_stack);
+
+#ifdef CONFIG_IA32_EMULATION
+ ret = ret || (regs->ip >= (unsigned long)entry_SYSCALL_compat &&
+ regs->ip < (unsigned long)entry_SYSCALL_compat_safe_stack);
+#endif
+
+ return ret;
+}
 #endif

 static inline unsigned long kernel_stack_pointer(struct pt_regs *regs)
diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index aa593743acf67..1fc0962c89c08 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -268,21 +268,20 @@ static void __init kvmclock_init_mem(void)

 static int __init kvm_setup_vsyscall_timeinfo(void)
 {
-#ifdef CONFIG_X86_64
- u8 flags;
+ kvmclock_init_mem();

- if (!per_cpu(hv_clock_per_cpu, 0) || !kvmclock_vsyscall)
- return 0;
+#ifdef CONFIG_X86_64
+ if (per_cpu(hv_clock_per_cpu, 0) && kvmclock_vsyscall) {
+ u8 flags;

- flags = pvclock_read_flags(&hv_clock_boot[0].pvti);
- if (!(flags & PVCLOCK_TSC_STABLE_BIT))
- return 0;
+ flags = pvclock_read_flags(&hv_clock_boot[0].pvti);
+ if (!(flags & PVCLOCK_TSC_STABLE_BIT))
+ return 0;

- kvm_clock.vdso_clock_mode = VDSO_CLOCKMODE_PVCLOCK;
+ kvm_clock.vdso_clock_mode = VDSO_CLOCKMODE_PVCLOCK;
+ }
 #endif

- kvmclock_init_mem();
-
 return 0;
 }
 early_initcall(kvm_setup_vsyscall_timeinfo);
diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
index 84c1821819afb..04a780abb512d 100644
--- a/arch/x86/kernel/sev-es.c
+++ b/arch/x86/kernel/sev-es.c
@@ -121,8 +121,18 @@ static void __init setup_vc_stacks(int cpu)
 cea_set_pte((void *)vaddr, pa, PAGE_KERNEL);
 }

-static __always_inline bool on_vc_stack(unsigned long sp)
+static __always_inline bool on_vc_stack(struct pt_regs *regs)
 {
+ unsigned long sp = regs->sp;
+
+ /* User-mode RSP is not trusted */
+ if (user_mode(regs))
+ return false;
+
+ /* SYSCALL gap still has user-mode RSP */
+ if (ip_within_syscall_gap(regs))
+ return false;
+
 return ((sp >= __this_cpu_ist_bottom_va(VC)) && (sp < __this_cpu_ist_top_va(VC)));
 }

@@ -144,7 +154,7 @@ void noinstr __sev_es_ist_enter(struct pt_regs *regs)
 old_ist = __this_cpu_read(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC]);

 /* Make room on the IST stack */
- if (on_vc_stack(regs->sp))
+ if (on_vc_stack(regs))
 new_ist = ALIGN_DOWN(regs->sp, 8) - sizeof(old_ist);
 else
 new_ist = old_ist - sizeof(old_ist);
@@ -248,7 +258,7 @@ static enum es_result vc_decode_insn(struct es_em_ctxt *ctxt)
 int res;

 if (user_mode(ctxt->regs)) {
- res = insn_fetch_from_user(ctxt->regs, buffer);
+ res = insn_fetch_from_user_inatomic(ctxt->regs, buffer);
 if (!res) {
 ctxt->fi.vector = X86_TRAP_PF;
 ctxt->fi.error_code = X86_PF_INSTR | X86_PF_USER;
@@ -1248,13 +1258,12 @@ static __always_inline bool on_vc_fallback_stack(struct pt_regs *regs)
 DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
 {
 struct sev_es_runtime_data *data = this_cpu_read(runtime_data);
+ irqentry_state_t irq_state;
 struct ghcb_state state;
 struct es_em_ctxt ctxt;
 enum es_result result;
 struct ghcb *ghcb;

- lockdep_assert_irqs_disabled();
-
 /*
 * Handle #DB before calling into !noinstr code to avoid recursive #DB.
 */
@@ -1263,6 +1272,8 @@ DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
 return;
 }

+ irq_state = irqentry_nmi_enter(regs);
+ lockdep_assert_irqs_disabled();
 instrumentation_begin();

 /*
@@ -1325,6 +1336,7 @@ DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)

 out:
 instrumentation_end();
+ irqentry_nmi_exit(regs, irq_state);

 return;

diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 7f5aec758f0ee..ac1874a2a70e8 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -694,8 +694,7 @@ asmlinkage __visible noinstr struct pt_regs *vc_switch_off_ist(struct pt_regs *r
 * In the SYSCALL entry path the RSP value comes from user-space - don't
 * trust it and switch to the current kernel stack
 */
- if (regs->ip >= (unsigned long)entry_SYSCALL_64 &&
- regs->ip < (unsigned long)entry_SYSCALL_64_safe_stack) {
+ if (ip_within_syscall_gap(regs)) {
 sp = this_cpu_read(cpu_current_top_of_stack);
 goto sync;
 }
diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
index 73f8001000669..c451d5f6422f6 100644
--- a/arch/x86/kernel/unwind_orc.c
+++ b/arch/x86/kernel/unwind_orc.c
@@ -367,8 +367,8 @@ static bool deref_stack_regs(struct unwind_state *state, unsigned long addr,
 if (!stack_access_ok(state, addr, sizeof(struct pt_regs)))
 return false;

- *ip = regs->ip;
- *sp = regs->sp;
+ *ip = READ_ONCE_NOCHECK(regs->ip);
+ *sp = READ_ONCE_NOCHECK(regs->sp);
 return true;
 }

@@ -380,8 +380,8 @@ static bool deref_stack_iret_regs(struct unwind_state *state, unsigned long addr
 if (!stack_access_ok(state, addr, IRET_FRAME_SIZE))
 return false;

- *ip = regs->ip;
- *sp = regs->sp;
+ *ip = READ_ONCE_NOCHECK(regs->ip);
+ *sp = READ_ONCE_NOCHECK(regs->sp);
 return true;
 }

@@ -402,12 +402,12 @@ static bool get_reg(struct unwind_state *state, unsigned int reg_off,
 return false;

 if (state->full_regs) {
- *val = ((unsigned long *)state->regs)[reg];
+ *val = READ_ONCE_NOCHECK(((unsigned long *)state->regs)[reg]);
 return true;
 }

 if (state->prev_regs) {
- *val = ((unsigned long *)state->prev_regs)[reg];
+ *val = READ_ONCE_NOCHECK(((unsigned long *)state->prev_regs)[reg]);
 return true;
 }

diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 43cceadd073ed..570fa298083cd 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1641,7 +1641,16 @@ static void apic_timer_expired(struct kvm_lapic *apic, bool from_timer_fn)
 }

 if (kvm_use_posted_timer_interrupt(apic->vcpu)) {
- kvm_wait_lapic_expire(vcpu);
+ /*
+ * Ensure the guest's timer has truly expired before posting an
+ * interrupt. Open code the relevant checks to avoid querying
+ * lapic_timer_int_injected(), which will be false since the
+ * interrupt isn't yet injected. Waiting until after injecting
+ * is not an option since that won't help a posted interrupt.
+ */
+ if (vcpu->arch.apic->lapic_timer.expired_tscdeadline &&
+ vcpu->arch.apic->lapic_timer.timer_advance_ns)
+ __kvm_wait_lapic_expire(vcpu);
 kvm_apic_inject_pending_timer_irqs(apic);
 return;
 }
diff --git a/arch/x86/lib/insn-eval.c b/arch/x86/lib/insn-eval.c
index 4229950a5d78c..bb0b3fe1e0a02 100644
--- a/arch/x86/lib/insn-eval.c
+++ b/arch/x86/lib/insn-eval.c
@@ -1415,6 +1415,25 @@ void __user *insn_get_addr_ref(struct insn *insn, struct pt_regs *regs)
 }
 }

+static unsigned long insn_get_effective_ip(struct pt_regs *regs)
+{
+ unsigned long seg_base = 0;
+
+ /*
+ * If not in user-space long mode, a custom code segment could be in
+ * use. This is true in protected mode (if the process defined a local
+ * descriptor table), or virtual-8086 mode. In most of the cases
+ * seg_base will be zero as in USER_CS.
+ */
+ if (!user_64bit_mode(regs)) {
+ seg_base = insn_get_seg_base(regs, INAT_SEG_REG_CS);
+ if (seg_base == -1L)
+ return 0;
+ }
+
+ return seg_base + regs->ip;
+}
+
 /**
 * insn_fetch_from_user() - Copy instruction bytes from user-space memory
 * @regs: Structure with register values as seen when entering kernel mode
@@ -1431,24 +1450,43 @@ void __user *insn_get_addr_ref(struct insn *insn, struct pt_regs *regs)
 */
 int insn_fetch_from_user(struct pt_regs *regs, unsigned char buf[MAX_INSN_SIZE])
 {
- unsigned long seg_base = 0;
+ unsigned long ip;
 int not_copied;

- /*
- * If not in user-space long mode, a custom code segment could be in
- * use. This is true in protected mode (if the process defined a local
- * descriptor table), or virtual-8086 mode. In most of the cases
- * seg_base will be zero as in USER_CS.
- */
- if (!user_64bit_mode(regs)) {
- seg_base = insn_get_seg_base(regs, INAT_SEG_REG_CS);
- if (seg_base == -1L)
- return 0;
- }
+ ip = insn_get_effective_ip(regs);
+ if (!ip)
+ return 0;
+
+ not_copied = copy_from_user(buf, (void __user *)ip, MAX_INSN_SIZE);

 return MAX_INSN_SIZE - not_copied;
+}
+
+/**
+ * insn_fetch_from_user_inatomic() - Copy instruction bytes from user-space memory
+ * while in atomic code
+ * @regs: Structure with register values as seen when entering kernel mode
+ * @buf: Array to store the fetched instruction
+ *
+ * Gets the linear address of the instruction and copies the instruction bytes
+ * to the buf. This function must be used in atomic context.
+ *
+ * Returns:
+ *
+ * Number of instruction bytes copied.
+ *
+ * 0 if nothing was copied.
+ */
+int insn_fetch_from_user_inatomic(struct pt_regs *regs, unsigned char buf[MAX_INSN_SIZE])
+{
+ unsigned long ip;
+ int not_copied;
+
+ ip = insn_get_effective_ip(regs);
+ if (!ip)
+ return 0;

- not_copied = copy_from_user(buf, (void __user *)(seg_base + regs->ip),
- MAX_INSN_SIZE);
+ not_copied = __copy_from_user_inatomic(buf, (void __user *)ip, MAX_INSN_SIZE);

 return MAX_INSN_SIZE - not_copied;
 }
diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 7a68b6e4300ce..df0ecf6790d35 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -318,6 +318,22 @@ int blkdev_report_zones_ioctl(struct block_device *bdev, fmode_t mode,
 	return 0;
 }
 
+static int blkdev_truncate_zone_range(struct block_device *bdev, fmode_t mode,
+				      const struct blk_zone_range *zrange)
+{
+	loff_t start, end;
+
+	if (zrange->sector + zrange->nr_sectors <= zrange->sector ||
+	    zrange->sector + zrange->nr_sectors > get_capacity(bdev->bd_disk))
+		/* Out of range */
+		return -EINVAL;
+
+	start = zrange->sector << SECTOR_SHIFT;
+	end = ((zrange->sector + zrange->nr_sectors) << SECTOR_SHIFT) - 1;
+
+	return truncate_bdev_range(bdev, mode, start, end);
+}
+
 /*
  * BLKRESETZONE, BLKOPENZONE, BLKCLOSEZONE and BLKFINISHZONE ioctl processing.
  * Called from blkdev_ioctl.
@@ -329,6 +345,7 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
 	struct request_queue *q;
 	struct blk_zone_range zrange;
 	enum req_opf op;
+	int ret;
 
 	if (!argp)
 		return -EINVAL;
@@ -352,6 +369,11 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
 	switch (cmd) {
 	case BLKRESETZONE:
 		op = REQ_OP_ZONE_RESET;
+
+		/* Invalidate the page cache, including dirty pages. */
+		ret = blkdev_truncate_zone_range(bdev, mode, &zrange);
+		if (ret)
+			return ret;
 		break;
 	case BLKOPENZONE:
 		op = REQ_OP_ZONE_OPEN;
@@ -366,8 +388,20 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
 		return -ENOTTY;
 	}
 
-	return blkdev_zone_mgmt(bdev, op, zrange.sector, zrange.nr_sectors,
-				GFP_KERNEL);
+	ret = blkdev_zone_mgmt(bdev, op, zrange.sector, zrange.nr_sectors,
+			       GFP_KERNEL);
+
+	/*
+	 * Invalidate the page cache again for zone reset: writes can only be
+	 * direct for zoned devices so concurrent writes would not add any page
+	 * to the page cache after/during reset. The page cache may be filled
+	 * again due to concurrent reads though and dropping the pages for
+	 * these is fine.
+	 */
+	if (!ret && cmd == BLKRESETZONE)
+		ret = blkdev_truncate_zone_range(bdev, mode, &zrange);
+
+	return ret;
 }
 
 static inline unsigned long *blk_alloc_zone_bitmap(int node,
diff --git a/crypto/Kconfig b/crypto/Kconfig
index a367fcfeb5d45..3913e409ba884 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -772,7 +772,7 @@ config CRYPTO_POLY1305_X86_64
 
 config CRYPTO_POLY1305_MIPS
 	tristate "Poly1305 authenticator algorithm (MIPS optimized)"
-	depends on CPU_MIPS32 || (CPU_MIPS64 && 64BIT)
+	depends on MIPS
 	select CRYPTO_ARCH_HAVE_LIB_POLY1305
 
 config CRYPTO_MD4
diff --git a/drivers/base/memory.c b/drivers/base/memory.c
index eef4ffb6122c9..de058d15b33ea 100644
--- a/drivers/base/memory.c
+++ b/drivers/base/memory.c
@@ -290,20 +290,20 @@ static ssize_t state_store(struct device *dev, struct device_attribute *attr,
 }
 
 /*
- * phys_device is a bad name for this.  What I really want
- * is a way to differentiate between memory ranges that
- * are part of physical devices that constitute
- * a complete removable unit or fru.
- * i.e. do these ranges belong to the same physical device,
- * s.t. if I offline all of these sections I can then
- * remove the physical device?
+ * Legacy interface that we cannot remove: s390x exposes the storage increment
+ * covered by a memory block, allowing for identifying which memory blocks
+ * comprise a storage increment. Since a memory block spans complete
+ * storage increments nowadays, this interface is basically unused. Other
+ * archs never exposed != 0.
 */
 static ssize_t phys_device_show(struct device *dev,
				struct device_attribute *attr, char *buf)
 {
 	struct memory_block *mem = to_memory_block(dev);
+	unsigned long start_pfn = section_nr_to_pfn(mem->start_section_nr);
 
-	return sysfs_emit(buf, "%d\n", mem->phys_device);
+	return sysfs_emit(buf, "%d\n",
+			  arch_get_memory_phys_device(start_pfn));
 }
 
 #ifdef CONFIG_MEMORY_HOTREMOVE
@@ -488,11 +488,7 @@ static DEVICE_ATTR_WO(soft_offline_page);
 static DEVICE_ATTR_WO(hard_offline_page);
 #endif
 
-/*
- * Note that phys_device is optional.  It is here to allow for
- * differentiation between which *physical* devices each
- * section belongs to...
- */
+/* See phys_device_show(). */
 int __weak arch_get_memory_phys_device(unsigned long start_pfn)
 {
 	return 0;
@@ -574,7 +570,6 @@ int register_memory(struct memory_block *memory)
 static int init_memory_block(unsigned long block_id, unsigned long state)
 {
 	struct memory_block *mem;
-	unsigned long start_pfn;
 	int ret = 0;
 
 	mem = find_memory_block_by_id(block_id);
@@ -588,8 +583,6 @@ static int init_memory_block(unsigned long block_id, unsigned long state)
 
 	mem->start_section_nr = block_id * sections_per_block;
 	mem->state = state;
-	start_pfn = section_nr_to_pfn(mem->start_section_nr);
-	mem->phys_device = arch_get_memory_phys_device(start_pfn);
 	mem->nid = NUMA_NO_NODE;
 
 	ret = register_memory(mem);
diff --git a/drivers/base/swnode.c b/drivers/base/swnode.c
index 4fcc1a6fb724c..fbfb01ff18565 100644
--- a/drivers/base/swnode.c
+++ b/drivers/base/swnode.c
@@ -786,6 +786,9 @@ int software_node_register(const struct software_node *node)
 	if (software_node_to_swnode(node))
 		return -EEXIST;
 
+	if (node->parent && !parent)
+		return -EINVAL;
+
 	return PTR_ERR_OR_ZERO(swnode_register(node, parent, 0));
 }
 EXPORT_SYMBOL_GPL(software_node_register);
diff --git a/drivers/base/test/Makefile b/drivers/base/test/Makefile
index 3ca56367c84b7..2f15fae8625f1 100644
--- a/drivers/base/test/Makefile
+++ b/drivers/base/test/Makefile
@@ -2,3 +2,4 @@
 obj-$(CONFIG_TEST_ASYNC_DRIVER_PROBE)	+= test_async_driver_probe.o
 
 obj-$(CONFIG_KUNIT_DRIVER_PE_TEST) += property-entry-test.o
+CFLAGS_REMOVE_property-entry-test.o += -fplugin-arg-structleak_plugin-byref -fplugin-arg-structleak_plugin-byref-all
diff --git a/drivers/block/rsxx/core.c b/drivers/block/rsxx/core.c
index 5ac1881396afb..227e1be4c6f99 100644
--- a/drivers/block/rsxx/core.c
+++ b/drivers/block/rsxx/core.c
@@ -871,6 +871,7 @@ static int rsxx_pci_probe(struct pci_dev *dev,
 	card->event_wq = create_singlethread_workqueue(DRIVER_NAME"_event");
 	if (!card->event_wq) {
 		dev_err(CARD_TO_DEV(card), "Failed card event setup.\n");
+		st = -ENOMEM;
 		goto failed_event_handler;
 	}
 
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 3279969fc99cb..37d11103a706d 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -628,7 +628,7 @@ static ssize_t writeback_store(struct device *dev,
 	struct bio_vec bio_vec;
 	struct page *page;
 	ssize_t ret = len;
-	int mode;
+	int mode, err;
 	unsigned long blk_idx = 0;
 
 	if (sysfs_streq(buf, "idle"))
@@ -639,8 +639,8 @@ static ssize_t writeback_store(struct device *dev,
 		if (strncmp(buf, PAGE_WB_SIG, sizeof(PAGE_WB_SIG) - 1))
 			return -EINVAL;
 
-		ret = kstrtol(buf + sizeof(PAGE_WB_SIG) - 1, 10, &index);
-		if (ret || index >= nr_pages)
+		if (kstrtol(buf + sizeof(PAGE_WB_SIG) - 1, 10, &index) ||
+		    index >= nr_pages)
 			return -EINVAL;
 
 		nr_pages = 1;
@@ -664,7 +664,7 @@ static ssize_t writeback_store(struct device *dev,
 		goto release_init_lock;
 	}
 
-	while (nr_pages--) {
+	for (; nr_pages != 0; index++, nr_pages--) {
 		struct bio_vec bvec;
 
 		bvec.bv_page = page;
@@ -729,12 +729,17 @@ static ssize_t writeback_store(struct device *dev,
 		 * XXX: A single page IO would be inefficient for write
 		 * but it would be not bad as starter.
 		 */
-		ret = submit_bio_wait(&bio);
-		if (ret) {
+		err = submit_bio_wait(&bio);
+		if (err) {
 			zram_slot_lock(zram, index);
 			zram_clear_flag(zram, index, ZRAM_UNDER_WB);
 			zram_clear_flag(zram, index, ZRAM_IDLE);
			zram_slot_unlock(zram, index);
+			/*
+			 * Return the last IO error unless every
+			 * IO succeeded.
+			 */
+			ret = err;
 			continue;
 		}
 
diff --git a/drivers/clk/qcom/gdsc.c b/drivers/clk/qcom/gdsc.c
index af26e0695b866..51ed640e527b4 100644
--- a/drivers/clk/qcom/gdsc.c
+++ b/drivers/clk/qcom/gdsc.c
@@ -183,7 +183,10 @@ static inline int gdsc_assert_reset(struct gdsc *sc)
 static inline void gdsc_force_mem_on(struct gdsc *sc)
 {
 	int i;
-	u32 mask = RETAIN_MEM | RETAIN_PERIPH;
+	u32 mask = RETAIN_MEM;
+
+	if (!(sc->flags & NO_RET_PERIPH))
+		mask |= RETAIN_PERIPH;
 
 	for (i = 0; i < sc->cxc_count; i++)
 		regmap_update_bits(sc->regmap, sc->cxcs[i], mask, mask);
@@ -192,7 +195,10 @@ static inline void gdsc_force_mem_on(struct gdsc *sc)
 static inline void gdsc_clear_mem_on(struct gdsc *sc)
 {
 	int i;
-	u32 mask = RETAIN_MEM | RETAIN_PERIPH;
+	u32 mask = RETAIN_MEM;
+
+	if (!(sc->flags & NO_RET_PERIPH))
+		mask |= RETAIN_PERIPH;
 
 	for (i = 0; i < sc->cxc_count; i++)
 		regmap_update_bits(sc->regmap, sc->cxcs[i], mask, 0);
diff --git a/drivers/clk/qcom/gdsc.h b/drivers/clk/qcom/gdsc.h
index bd537438c7932..5bb396b344d16 100644
--- a/drivers/clk/qcom/gdsc.h
+++ b/drivers/clk/qcom/gdsc.h
@@ -42,7 +42,7 @@ struct gdsc {
 #define PWRSTS_ON		BIT(2)
 #define PWRSTS_OFF_ON		(PWRSTS_OFF | PWRSTS_ON)
 #define PWRSTS_RET_ON		(PWRSTS_RET | PWRSTS_ON)
-	const u8 flags;
+	const u16 flags;
 #define VOTABLE		BIT(0)
 #define CLAMP_IO	BIT(1)
 #define HW_CTRL		BIT(2)
@@ -51,6 +51,7 @@ struct gdsc {
 #define POLL_CFG_GDSCR	BIT(5)
 #define ALWAYS_ON	BIT(6)
 #define RETAIN_FF_ENABLE	BIT(7)
+#define NO_RET_PERIPH	BIT(8)
 	struct reset_controller_dev *rcdev;
 	unsigned int *resets;
 	unsigned int reset_count;
diff --git a/drivers/clk/qcom/gpucc-msm8998.c b/drivers/clk/qcom/gpucc-msm8998.c
index 9b3923af02a14..1a518c4915b4b 100644
--- a/drivers/clk/qcom/gpucc-msm8998.c
+++ b/drivers/clk/qcom/gpucc-msm8998.c
@@ -253,12 +253,16 @@ static struct gdsc gpu_cx_gdsc = {
 static struct gdsc gpu_gx_gdsc = {
 	.gdscr = 0x1094,
 	.clamp_io_ctrl = 0x130,
+	.resets = (unsigned int []){ GPU_GX_BCR },
+	.reset_count = 1,
+	.cxcs = (unsigned int []){ 0x1098 },
+	.cxc_count = 1,
 	.pd = {
 		.name = "gpu_gx",
 	},
 	.parent = &gpu_cx_gdsc.pd,
-	.pwrsts = PWRSTS_OFF_ON,
-	.flags = CLAMP_IO | AON_RESET,
+	.pwrsts = PWRSTS_OFF_ON | PWRSTS_RET,
+	.flags = CLAMP_IO | SW_RESET | AON_RESET | NO_RET_PERIPH,
 };
 
 static struct clk_regmap *gpucc_msm8998_clocks[] = {
diff --git a/drivers/cpufreq/qcom-cpufreq-hw.c b/drivers/cpufreq/qcom-cpufreq-hw.c
index 2726e77c9e5a9..6de07556665b1 100644
--- a/drivers/cpufreq/qcom-cpufreq-hw.c
+++ b/drivers/cpufreq/qcom-cpufreq-hw.c
@@ -317,9 +317,9 @@ static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy)
 	}
 
 	base = ioremap(res->start, resource_size(res));
-	if (IS_ERR(base)) {
+	if (!base) {
 		dev_err(dev, "failed to map resource %pR\n", res);
-		ret = PTR_ERR(base);
+		ret = -ENOMEM;
 		goto release_region;
 	}
 
@@ -368,7 +368,7 @@ static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy)
 error:
 	kfree(data);
 unmap_base:
-	iounmap(data->base);
+	iounmap(base);
 release_region:
 	release_mem_region(res->start, resource_size(res));
 	return ret;
diff --git a/drivers/firmware/efi/libstub/efi-stub.c b/drivers/firmware/efi/libstub/efi-stub.c
index ec2f3985bef35..26e69788f27a4 100644
--- a/drivers/firmware/efi/libstub/efi-stub.c
+++ b/drivers/firmware/efi/libstub/efi-stub.c
@@ -96,6 +96,18 @@ static void install_memreserve_table(void)
 		efi_err("Failed to install memreserve config table!\n");
 }
 
+static u32 get_supported_rt_services(void)
+{
+	const efi_rt_properties_table_t *rt_prop_table;
+	u32 supported = EFI_RT_SUPPORTED_ALL;
+
+	rt_prop_table = get_efi_config_table(EFI_RT_PROPERTIES_TABLE_GUID);
+	if (rt_prop_table)
+		supported &= rt_prop_table->runtime_services_supported;
+
+	return supported;
+}
+
 /*
  * EFI entry point for the arm/arm64 EFI stubs. This is the entrypoint
  * that is described in the PE/COFF header. Most of the code is the same
@@ -250,6 +262,10 @@ efi_status_t __efiapi efi_pe_entry(efi_handle_t handle,
 			(prop_tbl->memory_protection_attribute &
 			 EFI_PROPERTIES_RUNTIME_MEMORY_PROTECTION_NON_EXECUTABLE_PE_DATA);
 
+	/* force efi_novamap if SetVirtualAddressMap() is unsupported */
+	efi_novamap |= !(get_supported_rt_services() &
+			 EFI_RT_SUPPORTED_SET_VIRTUAL_ADDRESS_MAP);
+
 	/* hibernation expects the runtime regions to stay in the same place */
 	if (!IS_ENABLED(CONFIG_HIBERNATION) && !efi_nokaslr && !flat_va_mapping) {
 		/*
diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
index 825b362eb4b7d..6898c27f71f85 100644
--- a/drivers/gpio/gpio-pca953x.c
+++ b/drivers/gpio/gpio-pca953x.c
@@ -112,8 +112,29 @@ MODULE_DEVICE_TABLE(i2c, pca953x_id);
 #ifdef CONFIG_GPIO_PCA953X_IRQ
 
 #include <linux/dmi.h>
-#include <linux/gpio.h>
-#include <linux/list.h>
+
+static const struct acpi_gpio_params pca953x_irq_gpios = { 0, 0, true };
+
+static const struct acpi_gpio_mapping pca953x_acpi_irq_gpios[] = {
+	{ "irq-gpios", &pca953x_irq_gpios, 1, ACPI_GPIO_QUIRK_ABSOLUTE_NUMBER },
+	{ }
+};
+
+static int pca953x_acpi_get_irq(struct device *dev)
+{
+	int ret;
+
+	ret = devm_acpi_dev_add_driver_gpios(dev, pca953x_acpi_irq_gpios);
+	if (ret)
+		dev_warn(dev, "can't add GPIO ACPI mapping\n");
+
+	ret = acpi_dev_gpio_irq_get_by(ACPI_COMPANION(dev), "irq-gpios", 0);
+	if (ret < 0)
+		return ret;
+
+	dev_info(dev, "ACPI interrupt quirk (IRQ %d)\n", ret);
+	return ret;
+}
 
 static const struct dmi_system_id pca953x_dmi_acpi_irq_info[] = {
 	{
@@ -132,59 +153,6 @@ static const struct dmi_system_id pca953x_dmi_acpi_irq_info[] = {
 	},
 	{}
 };
-
-#ifdef CONFIG_ACPI
-static int pca953x_acpi_get_pin(struct acpi_resource *ares, void *data)
-{
-	struct acpi_resource_gpio *agpio;
-	int *pin = data;
-
-	if (acpi_gpio_get_irq_resource(ares, &agpio))
-		*pin = agpio->pin_table[0];
-	return 1;
-}
-
-static int pca953x_acpi_find_pin(struct device *dev)
-{
-	struct acpi_device *adev = ACPI_COMPANION(dev);
-	int pin = -ENOENT, ret;
-	LIST_HEAD(r);
-
-	ret = acpi_dev_get_resources(adev, &r, pca953x_acpi_get_pin, &pin);
-	acpi_dev_free_resource_list(&r);
-	if (ret < 0)
-		return ret;
-
-	return pin;
-}
-#else
-static inline int pca953x_acpi_find_pin(struct device *dev) { return -ENXIO; }
-#endif
-
-static int pca953x_acpi_get_irq(struct device *dev)
-{
-	int pin, ret;
-
-	pin = pca953x_acpi_find_pin(dev);
-	if (pin < 0)
-		return pin;
-
-	dev_info(dev, "Applying ACPI interrupt quirk (GPIO %d)\n", pin);
-
-	if (!gpio_is_valid(pin))
-		return -EINVAL;
-
-	ret = gpio_request(pin, "pca953x interrupt");
-	if (ret)
-		return ret;
-
-	ret = gpio_to_irq(pin);
-
-	/* When pin is used as an IRQ, no need to keep it requested */
-	gpio_free(pin);
-
-	return ret;
-}
 #endif
 
 static const struct acpi_device_id pca953x_acpi_ids[] = {
diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c
index e37a57d0a2f07..495f779b2ab99 100644
--- a/drivers/gpio/gpiolib-acpi.c
+++ b/drivers/gpio/gpiolib-acpi.c
@@ -677,6 +677,7 @@ static int acpi_populate_gpio_lookup(struct acpi_resource *ares, void *data)
 	if (!lookup->desc) {
 		const struct acpi_resource_gpio *agpio = &ares->data.gpio;
 		bool gpioint = agpio->connection_type == ACPI_RESOURCE_GPIO_TYPE_INT;
+		struct gpio_desc *desc;
 		u16 pin_index;
 
 		if (lookup->info.quirks & ACPI_GPIO_QUIRK_ONLY_GPIOIO && gpioint)
@@ -689,8 +690,12 @@ static int acpi_populate_gpio_lookup(struct acpi_resource *ares, void *data)
 		if (pin_index >= agpio->pin_table_length)
 			return 1;
 
-		lookup->desc = acpi_get_gpiod(agpio->resource_source.string_ptr,
+		if (lookup->info.quirks & ACPI_GPIO_QUIRK_ABSOLUTE_NUMBER)
+			desc = gpio_to_desc(agpio->pin_table[pin_index]);
+		else
+			desc = acpi_get_gpiod(agpio->resource_source.string_ptr,
					      agpio->pin_table[pin_index]);
+		lookup->desc = desc;
 		lookup->info.pin_config = agpio->pin_config;
 		lookup->info.debounce = agpio->debounce_timeout;
 		lookup->info.gpioint = gpioint;
@@ -940,8 +945,9 @@ struct gpio_desc *acpi_node_get_gpiod(struct fwnode_handle *fwnode,
 }
 
 /**
- * acpi_dev_gpio_irq_get() - Find GpioInt and translate it to Linux IRQ number
+ * acpi_dev_gpio_irq_get_by() - Find GpioInt and translate it to Linux IRQ number
  * @adev: pointer to a ACPI device to get IRQ from
+ * @name: optional name of GpioInt resource
  * @index: index of GpioInt resource (starting from %0)
  *
  * If the device has one or more GpioInt resources, this function can be
@@ -951,9 +957,12 @@ struct gpio_desc *acpi_node_get_gpiod(struct fwnode_handle *fwnode,
 * The function is idempotent, though each time it runs it will configure GPIO
 * pin direction according to the flags in GpioInt resource.
 *
+ * The function takes optional @name parameter. If the resource has a property
+ * name, then only those will be taken into account.
+ *
 * Return: Linux IRQ number (> %0) on success, negative errno on failure.
 */
-int acpi_dev_gpio_irq_get(struct acpi_device *adev, int index)
+int acpi_dev_gpio_irq_get_by(struct acpi_device *adev, const char *name, int index)
 {
 	int idx, i;
 	unsigned int irq_flags;
@@ -963,7 +972,7 @@ int acpi_dev_gpio_irq_get(struct acpi_device *adev, int index)
 		struct acpi_gpio_info info;
 		struct gpio_desc *desc;
 
-		desc = acpi_get_gpiod_by_index(adev, NULL, i, &info);
+		desc = acpi_get_gpiod_by_index(adev, name, i, &info);
 
 		/* Ignore -EPROBE_DEFER, it only matters if idx matches */
 		if (IS_ERR(desc) && PTR_ERR(desc) != -EPROBE_DEFER)
@@ -1008,7 +1017,7 @@ int acpi_dev_gpio_irq_get(struct acpi_device *adev, int index)
 	}
 	return -ENOENT;
 }
-EXPORT_SYMBOL_GPL(acpi_dev_gpio_irq_get);
+EXPORT_SYMBOL_GPL(acpi_dev_gpio_irq_get_by);
 
 static acpi_status
 acpi_gpio_adr_space_handler(u32 function, acpi_physical_address address,
diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
index 97eec8d8dbdc4..e4cfa27f6893d 100644
--- a/drivers/gpio/gpiolib.c
+++ b/drivers/gpio/gpiolib.c
@@ -473,8 +473,12 @@ EXPORT_SYMBOL_GPL(gpiochip_line_is_valid);
 static void gpiodevice_release(struct device *dev)
 {
 	struct gpio_device *gdev = dev_get_drvdata(dev);
+	unsigned long flags;
 
+	spin_lock_irqsave(&gpio_lock, flags);
 	list_del(&gdev->list);
+	spin_unlock_irqrestore(&gpio_lock, flags);
+
 	ida_free(&gpio_ida, gdev->id);
 	kfree_const(gdev->label);
 	kfree(gdev->descs);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 37fb846af4888..ccdf508aca471 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -179,6 +179,7 @@ extern uint amdgpu_smu_memory_pool_size;
 extern uint amdgpu_dc_feature_mask;
 extern uint amdgpu_dc_debug_mask;
 extern uint amdgpu_dm_abm_level;
+extern int amdgpu_backlight;
 extern struct amdgpu_mgpu_info mgpu_info;
 extern int amdgpu_ras_enable;
 extern uint amdgpu_ras_mask;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
index 36a741d63ddcf..2e9b16fb3fcd1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
@@ -903,7 +903,7 @@ void amdgpu_acpi_fini(struct amdgpu_device *adev)
 */
 bool amdgpu_acpi_is_s0ix_supported(struct amdgpu_device *adev)
 {
-#if defined(CONFIG_AMD_PMC)
+#if defined(CONFIG_AMD_PMC) || defined(CONFIG_AMD_PMC_MODULE)
 	if (acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0) {
 		if (adev->flags & AMD_IS_APU)
 			return true;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 0ffea970d0179..1aed641a3eecc 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -777,6 +777,10 @@ uint amdgpu_dm_abm_level;
 MODULE_PARM_DESC(abmlevel, "ABM level (0 = off (default), 1-4 = backlight reduction level) ");
 module_param_named(abmlevel, amdgpu_dm_abm_level, uint, 0444);
 
+int amdgpu_backlight = -1;
+MODULE_PARM_DESC(backlight, "Backlight control (0 = pwm, 1 = aux, -1 auto (default))");
+module_param_named(backlight, amdgpu_backlight, bint, 0444);
+
 /**
 * DOC: tmz (int)
 * Trusted Memory Zone (TMZ) is a method to protect data being written
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 947cd923fb4c3..1d26e82602f75 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -2209,6 +2209,11 @@ static void update_connector_ext_caps(struct amdgpu_dm_connector *aconnector)
 	    caps->ext_caps->bits.hdr_aux_backlight_control == 1)
 		caps->aux_support = true;
 
+	if (amdgpu_backlight == 0)
+		caps->aux_support = false;
+	else if (amdgpu_backlight == 1)
+		caps->aux_support = true;
+
 	/* From the specification (CTA-861-G), for calculating the maximum
 	 * luminance we need to use:
 	 *	Luminance = 50*2**(CV/32)
@@ -3127,19 +3132,6 @@ static void amdgpu_dm_update_backlight_caps(struct amdgpu_display_manager *dm)
 #endif
 }
 
-static int set_backlight_via_aux(struct dc_link *link, uint32_t brightness)
-{
-	bool rc;
-
-	if (!link)
-		return 1;
-
-	rc = dc_link_set_backlight_level_nits(link, true, brightness,
-					      AUX_BL_DEFAULT_TRANSITION_TIME_MS);
-
-	return rc ? 0 : 1;
-}
-
 static int get_brightness_range(const struct amdgpu_dm_backlight_caps *caps,
				unsigned *min, unsigned *max)
 {
@@ -3202,9 +3194,10 @@ static int amdgpu_dm_backlight_update_status(struct backlight_device *bd)
 	brightness = convert_brightness_from_user(&caps, bd->props.brightness);
 	// Change brightness based on AUX property
 	if (caps.aux_support)
-		return set_backlight_via_aux(link, brightness);
-
-	rc = dc_link_set_backlight_level(dm->backlight_link, brightness, 0);
+		rc = dc_link_set_backlight_level_nits(link, true, brightness,
+						      AUX_BL_DEFAULT_TRANSITION_TIME_MS);
+	else
+		rc = dc_link_set_backlight_level(dm->backlight_link, brightness, 0);
 
 	return rc ? 0 : 1;
 }
@@ -3212,11 +3205,27 @@ static int amdgpu_dm_backlight_update_status(struct backlight_device *bd)
 static int amdgpu_dm_backlight_get_brightness(struct backlight_device *bd)
 {
 	struct amdgpu_display_manager *dm = bl_get_data(bd);
-	int ret = dc_link_get_backlight_level(dm->backlight_link);
+	struct amdgpu_dm_backlight_caps caps;
+
+	amdgpu_dm_update_backlight_caps(dm);
+	caps = dm->backlight_caps;
+
+	if (caps.aux_support) {
+		struct dc_link *link = (struct dc_link *)dm->backlight_link;
+		u32 avg, peak;
+		bool rc;
 
-	if (ret == DC_ERROR_UNEXPECTED)
-		return bd->props.brightness;
-	return convert_brightness_to_user(&dm->backlight_caps, ret);
+		rc = dc_link_get_backlight_level_nits(link, &avg, &peak);
+		if (!rc)
+			return bd->props.brightness;
+		return convert_brightness_to_user(&caps, avg);
+	} else {
+		int ret = dc_link_get_backlight_level(dm->backlight_link);
+
+		if (ret == DC_ERROR_UNEXPECTED)
+			return bd->props.brightness;
+		return convert_brightness_to_user(&caps, ret);
+	}
 }
 
 static const struct backlight_ops amdgpu_dm_backlight_ops = {
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
index 278ade3a90ccf..32cb5ce8bcd0d 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
@@ -2571,7 +2571,6 @@ bool dc_link_set_backlight_level(const struct dc_link *link,
 		if (pipe_ctx->plane_state == NULL)
 			frame_ramp = 0;
 	} else {
-		ASSERT(false);
 		return false;
 	}
 
diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
index 072f8c8809243..94ee2cab26b7c 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
@@ -1062,8 +1062,6 @@ static void patch_bounding_box(struct dc *dc, struct _vcs_dpi_soc_bounding_box_s
 {
 	int i;
 
-	DC_FP_START();
-
 	if (dc->bb_overrides.sr_exit_time_ns) {
 		for (i = 0; i < WM_SET_COUNT; i++) {
 			dc->clk_mgr->bw_params->wm_table.entries[i].sr_exit_time_us =
@@ -1088,8 +1086,6 @@ static void patch_bounding_box(struct dc *dc, struct _vcs_dpi_soc_bounding_box_s
				dc->bb_overrides.dram_clock_change_latency_ns / 1000.0;
 		}
 	}
-
-	DC_FP_END();
 }
 
 void dcn21_calculate_wm(
@@ -1339,7 +1335,7 @@ static noinline bool dcn21_validate_bandwidth_fp(struct dc *dc,
 	int vlevel = 0;
 	int pipe_split_from[MAX_PIPES];
 	int pipe_cnt = 0;
-	display_e2e_pipe_params_st *pipes = kzalloc(dc->res_pool->pipe_count * sizeof(display_e2e_pipe_params_st), GFP_KERNEL);
+	display_e2e_pipe_params_st *pipes = kzalloc(dc->res_pool->pipe_count * sizeof(display_e2e_pipe_params_st), GFP_ATOMIC);
 	DC_LOGGER_INIT(dc->ctx->logger);
 
 	BW_VAL_TRACE_COUNT();
diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
index 82676c086ce46..d7794370cb5a1 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
+++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
@@ -5216,10 +5216,10 @@ static int smu7_set_watermarks_for_clocks_ranges(struct pp_hwmgr *hwmgr,
 		for (j = 0; j < dep_sclk_table->count; j++) {
 			valid_entry = false;
 			for (k = 0; k < watermarks->num_wm_sets; k++) {
-				if (dep_sclk_table->entries[i].clk / 10 >= watermarks->wm_clk_ranges[k].wm_min_eng_clk_in_khz &&
-				    dep_sclk_table->entries[i].clk / 10 < watermarks->wm_clk_ranges[k].wm_max_eng_clk_in_khz &&
-				    dep_mclk_table->entries[i].clk / 10 >= watermarks->wm_clk_ranges[k].wm_min_mem_clk_in_khz &&
-				    dep_mclk_table->entries[i].clk / 10 < watermarks->wm_clk_ranges[k].wm_max_mem_clk_in_khz) {
+				if (dep_sclk_table->entries[i].clk >= watermarks->wm_clk_ranges[k].wm_min_eng_clk_in_khz / 10 &&
+				    dep_sclk_table->entries[i].clk < watermarks->wm_clk_ranges[k].wm_max_eng_clk_in_khz / 10 &&
+				    dep_mclk_table->entries[i].clk >= watermarks->wm_clk_ranges[k].wm_min_mem_clk_in_khz / 10 &&
+				    dep_mclk_table->entries[i].clk < watermarks->wm_clk_ranges[k].wm_max_mem_clk_in_khz / 10) {
					valid_entry = true;
					table->DisplayWatermark[i][j] = watermarks->wm_clk_ranges[k].wm_set_id;
					break;
diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
index 1b47f94e03317..c7a01ea9ed647 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
+++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
@@ -1506,6 +1506,48 @@ static int vega10_populate_single_lclk_level(struct pp_hwmgr *hwmgr,
 	return 0;
 }
 
+static int vega10_override_pcie_parameters(struct pp_hwmgr *hwmgr)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)(hwmgr->adev);
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	uint32_t pcie_gen = 0, pcie_width = 0;
+	PPTable_t *pp_table = &(data->smc_state_table.pp_table);
+	int i;
+
+	if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN4)
+		pcie_gen = 3;
+	else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3)
+		pcie_gen = 2;
+	else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2)
+		pcie_gen = 1;
+	else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN1)
+		pcie_gen = 0;
+
+	if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X16)
+		pcie_width = 6;
+	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X12)
+		pcie_width = 5;
+	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X8)
+		pcie_width = 4;
+	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X4)
+		pcie_width = 3;
+	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X2)
+		pcie_width = 2;
+	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X1)
+		pcie_width = 1;
+
+	for (i = 0; i < NUM_LINK_LEVELS; i++) {
+		if (pp_table->PcieGenSpeed[i] > pcie_gen)
+			pp_table->PcieGenSpeed[i] = pcie_gen;
+
+		if (pp_table->PcieLaneCount[i] > pcie_width)
+			pp_table->PcieLaneCount[i] = pcie_width;
+	}
+
+	return 0;
+}
+
 static int vega10_populate_smc_link_levels(struct pp_hwmgr *hwmgr)
 {
 	int result = -1;
@@ -2557,6 +2599,11 @@ static int vega10_init_smc_table(struct pp_hwmgr *hwmgr)
			"Failed to initialize Link Level!",
			return result);
 
+	result = vega10_override_pcie_parameters(hwmgr);
+	PP_ASSERT_WITH_CODE(!result,
+			"Failed to override pcie parameters!",
+			return result);
+
 	result = vega10_populate_all_graphic_levels(hwmgr);
 	PP_ASSERT_WITH_CODE(!result,
			"Failed to initialize Graphics Level!",
@@ -2923,6 +2970,7 @@ static int vega10_start_dpm(struct pp_hwmgr *hwmgr, uint32_t bitmap)
 	return 0;
 }
 
+
 static int vega10_enable_disable_PCC_limit_feature(struct pp_hwmgr *hwmgr, bool enable)
 {
 	struct vega10_hwmgr *data = hwmgr->backend;
diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega12_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega12_hwmgr.c
index dc206fa88c5e5..62076035029ac 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega12_hwmgr.c
+++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega12_hwmgr.c
@@ -481,6 +481,67 @@ static void vega12_init_dpm_state(struct vega12_dpm_state *dpm_state)
 	dpm_state->hard_max_level = 0xffff;
 }
 
+static int vega12_override_pcie_parameters(struct pp_hwmgr *hwmgr)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)(hwmgr->adev);
+	struct vega12_hwmgr *data =
+			(struct vega12_hwmgr *)(hwmgr->backend);
+	uint32_t pcie_gen = 0, pcie_width = 0, smu_pcie_arg, pcie_gen_arg, pcie_width_arg;
+	PPTable_t *pp_table = &(data->smc_state_table.pp_table);
+	int i;
+	int ret;
+
+	if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN4)
+		pcie_gen = 3;
+	else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3)
+		pcie_gen = 2;
+	else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2)
+		pcie_gen = 1;
+	else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN1)
+		pcie_gen = 0;
+
+	if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X16)
+		pcie_width = 6;
+	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X12)
+		pcie_width = 5;
+	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X8)
+		pcie_width = 4;
+	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X4)
+		pcie_width = 3;
+	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X2)
+		pcie_width = 2;
+	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X1)
+		pcie_width = 1;
+
+	/* Bit 31:16: LCLK DPM level. 0 is DPM0, and 1 is DPM1
+	 * Bit 15:8:  PCIE GEN, 0 to 3 corresponds to GEN1 to GEN4
+	 * Bit 7:0:   PCIE lane width, 1 to 7 corresponds is x1 to x32
+	 */
+	for (i = 0; i < NUM_LINK_LEVELS; i++) {
+		pcie_gen_arg = (pp_table->PcieGenSpeed[i] > pcie_gen) ? pcie_gen :
+			pp_table->PcieGenSpeed[i];
+		pcie_width_arg = (pp_table->PcieLaneCount[i] > pcie_width) ? pcie_width :
+			pp_table->PcieLaneCount[i];
+
+		if (pcie_gen_arg != pp_table->PcieGenSpeed[i] || pcie_width_arg !=
+		    pp_table->PcieLaneCount[i]) {
+			smu_pcie_arg = (i << 16) | (pcie_gen_arg << 8) | pcie_width_arg;
+			ret = smum_send_msg_to_smc_with_parameter(hwmgr,
+				PPSMC_MSG_OverridePcieParameters, smu_pcie_arg,
+				NULL);
+			PP_ASSERT_WITH_CODE(!ret,
+				"[OverridePcieParameters] Attempt to override pcie params failed!",
+				return ret);
+		}
+
+		/* update the pptable */
+		pp_table->PcieGenSpeed[i] = pcie_gen_arg;
+		pp_table->PcieLaneCount[i] = pcie_width_arg;
+	}
+
+	return 0;
+}
+
 static int vega12_get_number_of_dpm_level(struct pp_hwmgr *hwmgr,
 		PPCLK_e clk_id, uint32_t *num_of_levels)
 {
@@ -969,6 +1030,11 @@ static int vega12_enable_dpm_tasks(struct pp_hwmgr *hwmgr)
 			"Failed to enable all smu features!",
 			return result);
 
+	result = vega12_override_pcie_parameters(hwmgr);
+	PP_ASSERT_WITH_CODE(!result,
+			"[EnableDPMTasks] Failed to override pcie parameters!",
+			return result);
+
 	tmp_result = vega12_power_control_set_level(hwmgr);
 	PP_ASSERT_WITH_CODE(!tmp_result,
 			"Failed to power control set level!",
diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c
index da84012b7fd51..251979c059c8b 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c
+++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c
@@ -832,7 +832,9 @@ static int vega20_override_pcie_parameters(struct pp_hwmgr *hwmgr)
 	struct amdgpu_device *adev = (struct amdgpu_device *)(hwmgr->adev);
 	struct vega20_hwmgr *data =
 			(struct vega20_hwmgr *)(hwmgr->backend);
-	uint32_t pcie_gen = 0, pcie_width = 0, smu_pcie_arg;
+	uint32_t pcie_gen = 0, pcie_width = 0, smu_pcie_arg, pcie_gen_arg, pcie_width_arg;
+	PPTable_t *pp_table = &(data->smc_state_table.pp_table);
+	int i;
 	int ret;
 
 	if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN4)
@@ -861,17 +863,27 @@ static int vega20_override_pcie_parameters(struct pp_hwmgr *hwmgr)
 	 * Bit 15:8:  PCIE GEN, 0 to 3 corresponds to GEN1 to GEN4
 	 * Bit 7:0:   PCIE lane width, 1 to 7 corresponds is x1 to x32
 	 */
-	smu_pcie_arg = (1 << 16) | (pcie_gen << 8) | pcie_width;
-	ret = smum_send_msg_to_smc_with_parameter(hwmgr,
-			PPSMC_MSG_OverridePcieParameters, smu_pcie_arg,
-			NULL);
-	PP_ASSERT_WITH_CODE(!ret,
-		"[OverridePcieParameters] Attempt to override pcie params failed!",
-		return ret);
+	for (i = 0; i < NUM_LINK_LEVELS; i++) {
+		pcie_gen_arg = (pp_table->PcieGenSpeed[i] > pcie_gen) ? pcie_gen :
+			pp_table->PcieGenSpeed[i];
+		pcie_width_arg = (pp_table->PcieLaneCount[i] > pcie_width) ? pcie_width :
+			pp_table->PcieLaneCount[i];
+
+		if (pcie_gen_arg != pp_table->PcieGenSpeed[i] || pcie_width_arg !=
+		    pp_table->PcieLaneCount[i]) {
+			smu_pcie_arg = (i << 16) | (pcie_gen_arg << 8) | pcie_width_arg;
+			ret = smum_send_msg_to_smc_with_parameter(hwmgr,
+				PPSMC_MSG_OverridePcieParameters, smu_pcie_arg,
+				NULL);
+			PP_ASSERT_WITH_CODE(!ret,
+				"[OverridePcieParameters] Attempt to override pcie params failed!",
+				return ret);
+		}
 
-	data->pcie_parameters_override = true;
-	data->pcie_gen_level1 = pcie_gen;
-	data->pcie_width_level1 = pcie_width;
+		/* update the pptable */
+		pp_table->PcieGenSpeed[i] = pcie_gen_arg;
+		pp_table->PcieLaneCount[i] = pcie_width_arg;
+	}
 
 	return 0;
 }
@@ -3320,9 +3332,7 @@ static int vega20_print_clock_levels(struct pp_hwmgr *hwmgr,
 			data->od8_settings.od8_settings_array;
 	OverDriveTable_t *od_table =
 			&(data->smc_state_table.overdrive_table);
-	struct phm_ppt_v3_information *pptable_information =
-		(struct phm_ppt_v3_information *)hwmgr->pptable;
-	PPTable_t *pptable = (PPTable_t *)pptable_information->smc_pptable;
+	PPTable_t *pptable = &(data->smc_state_table.pp_table);
 	struct pp_clock_levels_with_latency clocks;
 	struct vega20_single_dpm_table *fclk_dpm_table =
 			&(data->dpm_table.fclk_table);
@@ -3421,13 +3431,9 @@ static int vega20_print_clock_levels(struct pp_hwmgr *hwmgr,
 		current_lane_width =
 			vega20_get_current_pcie_link_width_level(hwmgr);
 		for (i = 0; i < NUM_LINK_LEVELS; i++) {
-			if (i == 1 && data->pcie_parameters_override) {
-				gen_speed = data->pcie_gen_level1;
-				lane_width = data->pcie_width_level1;
-			} else {
-				gen_speed = pptable->PcieGenSpeed[i];
-				lane_width = pptable->PcieLaneCount[i];
-			}
+			gen_speed = pptable->PcieGenSpeed[i];
+			lane_width = pptable->PcieLaneCount[i];
+
 			size += sprintf(buf + size, "%d: %s %s %dMhz %s\n", i,
 					(gen_speed == 0) ? "2.5GT/s," :
 					(gen_speed == 1) ? "5.0GT/s," :
diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index e82db0f4e7715..080fd437fd43c 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -2043,7 +2043,7 @@ static void drm_fbdev_cleanup(struct drm_fb_helper *fb_helper)
 
 	if (shadow)
 		vfree(shadow);
-	else
+	else if (fb_helper->buffer)
 		drm_client_buffer_vunmap(fb_helper->buffer);
 
 	drm_client_framebuffer_delete(fb_helper->buffer);
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 9825c378dfa6d..6d625cee7a6af 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -357,13 +357,14 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
 	if (--shmem->vmap_use_count > 0)
 		return;
 
-	if (obj->import_attach)
+	if (obj->import_attach) {
 		dma_buf_vunmap(obj->import_attach->dmabuf, map);
-	else
+	} else {
 		vunmap(shmem->vaddr);
+		drm_gem_shmem_put_pages(shmem);
+	}
 
 	shmem->vaddr = NULL;
-	drm_gem_shmem_put_pages(shmem);
 }
 
 /*
@@ -525,14 +526,28 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	struct drm_gem_object *obj = vma->vm_private_data;
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 	loff_t num_pages = obj->size >> PAGE_SHIFT;
+	vm_fault_t ret;
 	struct page *page;
+	pgoff_t page_offset;
 
-	if (vmf->pgoff >= num_pages || WARN_ON_ONCE(!shmem->pages))
-		return VM_FAULT_SIGBUS;
+	/* We don't use vmf->pgoff since that has the fake offset */
+	page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
 
-	page = shmem->pages[vmf->pgoff];
+	mutex_lock(&shmem->pages_lock);
 
-	return vmf_insert_page(vma, vmf->address, page);
+	if (page_offset >= num_pages ||
+	    WARN_ON_ONCE(!shmem->pages) ||
+	    shmem->madv < 0) {
+		ret = VM_FAULT_SIGBUS;
+	} else {
+		page = shmem->pages[page_offset];
+
+		ret = vmf_insert_page(vma, vmf->address, page);
+	}
+
+	mutex_unlock(&shmem->pages_lock);
+
+	return ret;
 }
 
 static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
@@ -581,9 +596,6 @@ int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 	struct drm_gem_shmem_object *shmem;
 	int ret;
 
-	/* Remove the fake offset */
-	vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
-
 	if (obj->import_attach) {
 		/* Drop the reference drm_gem_mmap_obj() acquired.*/
 		drm_gem_object_put(obj);
diff --git a/drivers/gpu/drm/drm_ioc32.c b/drivers/gpu/drm/drm_ioc32.c
index f86448ab1fe04..dc734d4828a17 100644
--- a/drivers/gpu/drm/drm_ioc32.c
+++ b/drivers/gpu/drm/drm_ioc32.c
@@ -99,6 +99,8 @@ static int compat_drm_version(struct file *file, unsigned int cmd,
 	if (copy_from_user(&v32, (void __user *)arg, sizeof(v32)))
 		return -EFAULT;
 
+	memset(&v, 0, sizeof(v));
+
 	v = (struct drm_version) {
 		.name_len = v32.name_len,
 		.name = compat_ptr(v32.name),
@@ -137,6 +139,9 @@ static int compat_drm_getunique(struct file *file, unsigned int cmd,
 
 	if (copy_from_user(&uq32, (void __user *)arg, sizeof(uq32)))
 		return -EFAULT;
+
+	memset(&uq, 0, sizeof(uq));
+
 	uq = (struct drm_unique){
 		.unique_len = uq32.unique_len,
 		.unique = compat_ptr(uq32.unique),
@@ -265,6 +270,8 @@ static int compat_drm_getclient(struct file *file, unsigned int cmd,
 	if (copy_from_user(&c32, argp, sizeof(c32)))
 		return -EFAULT;
 
+	memset(&client, 0, sizeof(client));
+
 	client.idx = c32.idx;
 
 	err = drm_ioctl_kernel(file, drm_getclient, &client, 0);
@@ -852,6 +859,8 @@ static int compat_drm_wait_vblank(struct file *file, unsigned int cmd,
 	if (copy_from_user(&req32, argp, sizeof(req32)))
 		return -EFAULT;
 
+	memset(&req, 0, sizeof(req));
+
 	req.request.type = req32.request.type;
 	req.request.sequence = req32.request.sequence;
 	req.request.signal = req32.request.signal;
@@ -889,6 +898,8 @@ static int compat_drm_mode_addfb2(struct file *file, unsigned int cmd,
 	struct drm_mode_fb_cmd2 req64;
 	int err;
 
+	memset(&req64, 0, sizeof(req64));
+
 	if (copy_from_user(&req64, argp,
 			   offsetof(drm_mode_fb_cmd232_t, modifier)))
 		return -EFAULT;
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index 0b31670343f5a..346aa2057bad0 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -709,9 +709,12 @@ static int engine_setup_common(struct intel_engine_cs *engine)
 		goto err_status;
 	}
 
+	err = intel_engine_init_cmd_parser(engine);
+	if (err)
+		goto err_cmd_parser;
+
 	intel_engine_init_active(engine, ENGINE_PHYSICAL);
 	intel_engine_init_execlists(engine);
-	intel_engine_init_cmd_parser(engine);
 	intel_engine_init__pm(engine);
 	intel_engine_init_retire(engine);
 
@@ -725,6 +728,8 @@ static int engine_setup_common(struct intel_engine_cs *engine)
 
 	return 0;
 
+err_cmd_parser:
+	intel_breadcrumbs_free(engine->breadcrumbs);
 err_status:
 	cleanup_status_page(engine);
 	return err;
diff --git a/drivers/gpu/drm/i915/i915_cmd_parser.c b/drivers/gpu/drm/i915/i915_cmd_parser.c
index b0899b665e852..da1d6d58fc429 100644
--- a/drivers/gpu/drm/i915/i915_cmd_parser.c
+++ b/drivers/gpu/drm/i915/i915_cmd_parser.c
@@ -939,7 +939,7 @@ static void fini_hash_table(struct intel_engine_cs *engine)
 * struct intel_engine_cs based on whether the platform requires software
 * command parsing.
 */
-void intel_engine_init_cmd_parser(struct intel_engine_cs *engine)
+int intel_engine_init_cmd_parser(struct intel_engine_cs *engine)
 {
 	const struct drm_i915_cmd_table *cmd_tables;
 	int cmd_table_count;
@@ -947,7 +947,7 @@ void intel_engine_init_cmd_parser(struct intel_engine_cs *engine)
 
 	if (!IS_GEN(engine->i915, 7) && !(IS_GEN(engine->i915, 9) &&
					  engine->class == COPY_ENGINE_CLASS))
-		return;
+		return 0;
 
 	switch (engine->class) {
 	case RENDER_CLASS:
@@ -1012,19 +1012,19 @@ void intel_engine_init_cmd_parser(struct intel_engine_cs *engine)
 		break;
 	default:
 		MISSING_CASE(engine->class);
-		return;
+		goto out;
 	}
 
 	if (!validate_cmds_sorted(engine, cmd_tables, cmd_table_count)) {
 		drm_err(&engine->i915->drm,
 			"%s: command descriptions are not sorted\n",
 			engine->name);
-		return;
+		goto out;
 	}
 	if (!validate_regs_sorted(engine)) {
 		drm_err(&engine->i915->drm,
 			"%s: registers are not sorted\n", engine->name);
-		return;
+		goto out;
 	}
 
 	ret = init_hash_table(engine, cmd_tables, cmd_table_count);
@@ -1032,10 +1032,17 @@ void intel_engine_init_cmd_parser(struct intel_engine_cs *engine)
 		drm_err(&engine->i915->drm,
 			"%s: initialised failed!\n", engine->name);
 		fini_hash_table(engine);
-		return;
+		goto out;
 	}
 
 	engine->flags |= I915_ENGINE_USING_CMD_PARSER;
+
+out:
+	if (intel_engine_requires_cmd_parser(engine) &&
	    !intel_engine_using_cmd_parser(engine))
+		return -EINVAL;
+
+	return 0;
 }
 
 /**
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index c6964f82a1bb6..bd5f76a28d68d 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1947,7 +1947,7 @@ const char *i915_cache_level_str(struct drm_i915_private *i915, int type);
 
 /* i915_cmd_parser.c */
 int i915_cmd_parser_get_version(struct drm_i915_private *dev_priv);
-void intel_engine_init_cmd_parser(struct intel_engine_cs *engine);
+int intel_engine_init_cmd_parser(struct intel_engine_cs *engine);
 void intel_engine_cleanup_cmd_parser(struct intel_engine_cs *engine);
 int intel_engine_cmd_parser(struct intel_engine_cs *engine,
			    struct i915_vma *batch,
diff --git a/drivers/gpu/drm/meson/meson_drv.c b/drivers/gpu/drm/meson/meson_drv.c
index 42c5d3246cfcb..453d8b4c5763d 100644
--- a/drivers/gpu/drm/meson/meson_drv.c
+++ b/drivers/gpu/drm/meson/meson_drv.c
@@ -482,6 +482,16 @@ static int meson_probe_remote(struct platform_device *pdev,
 	return count;
 }
 
+static void meson_drv_shutdown(struct platform_device *pdev)
+{
+	struct meson_drm *priv = dev_get_drvdata(&pdev->dev);
+	struct drm_device *drm = priv->drm;
+
+	DRM_DEBUG_DRIVER("\n");
+	drm_kms_helper_poll_fini(drm);
+	drm_atomic_helper_shutdown(drm);
+}
+
 static int meson_drv_probe(struct platform_device *pdev)
 {
 	struct component_match *match = NULL;
@@ -553,6 +563,7 @@ static const struct dev_pm_ops meson_drv_pm_ops = {
 
 static struct platform_driver meson_drm_platform_driver = {
 	.probe      = meson_drv_probe,
+	.shutdown   = meson_drv_shutdown,
 	.driver     = {
 		.name	= "meson-drm",
 		.of_match_table = dt_match,
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 7ea367a5444dd..f1c9a22083beb 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -556,7 +556,8 @@ nouveau_bo_sync_for_device(struct nouveau_bo *nvbo)
 	if (nvbo->force_coherent)
 		return;
 
-	for (i = 0; i < ttm_dma->num_pages; ++i) {
+	i = 0;
+	while (i < ttm_dma->num_pages) {
 		struct page *p = ttm_dma->pages[i];
 		size_t num_pages = 1;
 
@@ -587,7 +588,8 @@ nouveau_bo_sync_for_cpu(struct nouveau_bo *nvbo)
 	if (nvbo->force_coherent)
 		return;
 
-	for (i = 0; i < ttm_dma->num_pages; ++i) {
+	i = 0;
+	while (i < ttm_dma->num_pages) {
 		struct page *p = ttm_dma->pages[i];
 		size_t num_pages = 1;
 
diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index 012bce0cdb65c..10738e04c09b8 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -328,6 +328,7 @@ static void qxl_crtc_update_monitors_config(struct drm_crtc *crtc,
 
 	head.id = i;
 	head.flags = 0;
+	head.surface_id = 0;
 	oldcount = qdev->monitors_config->count;
 	if (crtc->state->active) {
 		struct drm_display_mode *mode = &crtc->mode;
diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 5f3adba43e478..aa3b589f30a18 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -575,6 +575,8 @@ struct radeon_gem {
 	struct list_head	objects;
 };
 
+extern const struct drm_gem_object_funcs radeon_gem_object_funcs;
+
 int radeon_gem_init(struct radeon_device *rdev);
 void radeon_gem_fini(struct radeon_device *rdev);
 int radeon_gem_object_create(struct radeon_device *rdev, unsigned long size,
diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
index b6b21d2e72624..f17f621077deb 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -43,7 +43,7 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj);
 int radeon_gem_prime_pin(struct drm_gem_object *obj);
 void radeon_gem_prime_unpin(struct drm_gem_object *obj);
 
-static const struct drm_gem_object_funcs radeon_gem_object_funcs;
+const struct drm_gem_object_funcs radeon_gem_object_funcs;
 
 static void radeon_gem_object_free(struct drm_gem_object *gobj)
 {
@@ -227,7 +227,7 @@ static int radeon_gem_handle_lockup(struct radeon_device *rdev, int r)
 	return r;
 }
 
-static const struct drm_gem_object_funcs radeon_gem_object_funcs = {
+const struct drm_gem_object_funcs radeon_gem_object_funcs = {
 	.free = radeon_gem_object_free,
 	.open = radeon_gem_object_open,
 	.close = radeon_gem_object_close,
diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
index dd482edc819c5..d0ff3ce68a4f5 100644
--- a/drivers/gpu/drm/radeon/radeon_prime.c
+++ b/drivers/gpu/drm/radeon/radeon_prime.c
@@ -56,6 +56,8 @@ struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev,
 	if (ret)
 		return ERR_PTR(ret);
 
+	bo->tbo.base.funcs = &radeon_gem_object_funcs;
+
 	mutex_lock(&rdev->gem.mutex);
 	list_add_tail(&bo->list, &rdev->gem.objects);
 	mutex_unlock(&rdev->gem.mutex);
diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
index 33f65f4626e5a..23866a54e3f91 100644
--- a/drivers/gpu/drm/tiny/gm12u320.c
+++ b/drivers/gpu/drm/tiny/gm12u320.c
@@ -83,6 +83,7 @@ MODULE_PARM_DESC(eco_mode, "Turn on Eco mode (less bright, more silent)");
 
 struct gm12u320_device {
 	struct drm_device	         dev;
+	struct device                   *dmadev;
 	struct drm_simple_display_pipe   pipe;
 	struct drm_connector	         conn;
 	unsigned char                   *cmd_buf;
@@ -601,6 +602,22 @@ static const uint64_t gm12u320_pipe_modifiers[] = {
 	DRM_FORMAT_MOD_INVALID
 };
 
+/*
+ * FIXME: Dma-buf sharing requires DMA support by the importing device.
+ *        This function is a workaround to make USB devices work as well.
+ *        See todo.rst for how to fix the issue in the dma-buf framework.
+ */
+static struct drm_gem_object *gm12u320_gem_prime_import(struct drm_device *dev,
							struct dma_buf *dma_buf)
+{
+	struct gm12u320_device *gm12u320 = to_gm12u320(dev);
+
+	if (!gm12u320->dmadev)
+		return ERR_PTR(-ENODEV);
+
+	return drm_gem_prime_import_dev(dev, dma_buf, gm12u320->dmadev);
+}
+
 DEFINE_DRM_GEM_FOPS(gm12u320_fops);
 
 static const struct drm_driver gm12u320_drm_driver = {
@@ -614,6 +631,7 @@ static const struct drm_driver gm12u320_drm_driver = {
 
 	.fops		 = &gm12u320_fops,
 	DRM_GEM_SHMEM_DRIVER_OPS,
+	.gem_prime_import = gm12u320_gem_prime_import,
 };
 
 static const struct drm_mode_config_funcs gm12u320_mode_config_funcs = {
@@ -640,15 +658,18 @@ static int gm12u320_usb_probe(struct usb_interface *interface,
				      struct gm12u320_device, dev);
 	if (IS_ERR(gm12u320))
 		return PTR_ERR(gm12u320);
+	dev = &gm12u320->dev;
+
+	gm12u320->dmadev = usb_intf_get_dma_device(to_usb_interface(dev->dev));
+	if (!gm12u320->dmadev)
+		drm_warn(dev, "buffer sharing not supported"); /* not an error */
 
 	INIT_DELAYED_WORK(&gm12u320->fb_update.work, gm12u320_fb_update_work);
 	mutex_init(&gm12u320->fb_update.lock);
 
-	dev = &gm12u320->dev;
-
 	ret = drmm_mode_config_init(dev);
 	if (ret)
-		return ret;
+		goto err_put_device;
 
 	dev->mode_config.min_width = GM12U320_USER_WIDTH;
 	dev->mode_config.max_width = GM12U320_USER_WIDTH;
@@ -658,15 +679,15 @@ static int gm12u320_usb_probe(struct usb_interface *interface,
 
 	ret = gm12u320_usb_alloc(gm12u320);
 	if (ret)
-		return ret;
+		goto err_put_device;
 
 	ret = gm12u320_set_ecomode(gm12u320);
 	if (ret)
-		return ret;
+		goto err_put_device;
 
 	ret = gm12u320_conn_init(gm12u320);
 	if (ret)
-		return ret;
+		goto err_put_device;
 
 	ret = drm_simple_display_pipe_init(&gm12u320->dev,
					   &gm12u320->pipe,
@@ -676,24 +697,31 @@ static int gm12u320_usb_probe(struct usb_interface *interface,
					   gm12u320_pipe_modifiers,
					   &gm12u320->conn);
 	if (ret)
-		return ret;
+		goto err_put_device;
 
 	drm_mode_config_reset(dev);
 
 	usb_set_intfdata(interface, dev);
 	ret = drm_dev_register(dev, 0);
 	if (ret)
-		return ret;
+		goto err_put_device;
 
 	drm_fbdev_generic_setup(dev, 0);
 
 	return 0;
+
+err_put_device:
+	put_device(gm12u320->dmadev);
+	return ret;
 }
 
 static void gm12u320_usb_disconnect(struct usb_interface *interface)
 {
 	struct drm_device *dev = usb_get_intfdata(interface);
+	struct gm12u320_device *gm12u320 = to_gm12u320(dev);
 
+	put_device(gm12u320->dmadev);
+	gm12u320->dmadev = NULL;
 	drm_dev_unplug(dev);
 	drm_atomic_helper_shutdown(dev);
 }
diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index 6e27cb1bf48b2..4eb6efb8b8c02 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -268,13 +268,13 @@ static void ttm_pool_type_init(struct ttm_pool_type *pt, struct ttm_pool *pool,
 /* Remove a pool_type from the global shrinker list and free all pages */
 static void ttm_pool_type_fini(struct ttm_pool_type *pt)
 {
-	struct page *p, *tmp;
+	struct page *p;
 
 	mutex_lock(&shrinker_lock);
 	list_del(&pt->shrinker_list);
 	mutex_unlock(&shrinker_lock);
 
-	list_for_each_entry_safe(p, tmp, &pt->pages, lru)
+	while ((p = ttm_pool_type_take(pt)))
 		ttm_pool_free_page(pt->pool, pt->caching, pt->order, p);
 }
 
diff --git a/drivers/gpu/drm/udl/udl_drv.c b/drivers/gpu/drm/udl/udl_drv.c
index 9269092697d8c..5703277c6f527 100644
--- a/drivers/gpu/drm/udl/udl_drv.c
+++ b/drivers/gpu/drm/udl/udl_drv.c
@@ -32,6 +32,22 @@ static int udl_usb_resume(struct usb_interface *interface)
 	return drm_mode_config_helper_resume(dev);
 }
 
+/*
+ * FIXME: Dma-buf sharing requires DMA support by the importing device.
+ *        This function is a workaround to make USB devices work as well.
+ *        See todo.rst for how to fix the issue in the dma-buf framework.
+ */
+static struct drm_gem_object *udl_driver_gem_prime_import(struct drm_device *dev,
							  struct dma_buf *dma_buf)
+{
+	struct udl_device *udl = to_udl(dev);
+
+	if (!udl->dmadev)
+		return ERR_PTR(-ENODEV);
+
+	return drm_gem_prime_import_dev(dev, dma_buf, udl->dmadev);
+}
+
 DEFINE_DRM_GEM_FOPS(udl_driver_fops);
 
 static const struct drm_driver driver = {
@@ -40,6 +56,7 @@ static const struct drm_driver driver = {
 	/* GEM hooks */
 	.fops = &udl_driver_fops,
 	DRM_GEM_SHMEM_DRIVER_OPS,
+	.gem_prime_import = udl_driver_gem_prime_import,
 
 	.name = DRIVER_NAME,
 	.desc = DRIVER_DESC,
diff --git a/drivers/gpu/drm/udl/udl_drv.h b/drivers/gpu/drm/udl/udl_drv.h
index 875e73551ae98..cc16a13316e4e 100644
--- a/drivers/gpu/drm/udl/udl_drv.h
+++ b/drivers/gpu/drm/udl/udl_drv.h
@@ -50,6 +50,7 @@ struct urb_list {
 struct udl_device {
 	struct drm_device drm;
 	struct device *dev;
+	struct device *dmadev;
 
 	struct drm_simple_display_pipe display_pipe;
 
diff --git a/drivers/gpu/drm/udl/udl_main.c b/drivers/gpu/drm/udl/udl_main.c
index 0e2a376cb0752..853f147036f6b 100644
--- a/drivers/gpu/drm/udl/udl_main.c
+++ b/drivers/gpu/drm/udl/udl_main.c
@@ -315,6 +315,10 @@ int udl_init(struct udl_device *udl)
 
 	DRM_DEBUG("\n");
 
+	udl->dmadev = usb_intf_get_dma_device(to_usb_interface(dev->dev));
+	if (!udl->dmadev)
+		drm_warn(dev, "buffer sharing not supported"); /* not an error */
+
 	mutex_init(&udl->gem_lock);
 
 	if (!udl_parse_vendor_descriptor(udl)) {
@@ -343,12 +347,18 @@ int udl_init(struct udl_device *udl)
 err:
 	if (udl->urbs.count)
 		udl_free_urb_list(dev);
+	put_device(udl->dmadev);
 	DRM_ERROR("%d\n", ret);
 	return ret;
 }
 
 int udl_drop_usb(struct drm_device *dev)
 {
+	struct udl_device *udl = to_udl(dev);
+
 	udl_free_urb_list(dev);
+	put_device(udl->dmadev);
+	udl->dmadev = NULL;
+
 	return 0;
 }
diff --git a/drivers/hid/hid-logitech-dj.c b/drivers/hid/hid-logitech-dj.c
index fcdc922bc9733..271bd8d243395 100644
--- a/drivers/hid/hid-logitech-dj.c
+++ b/drivers/hid/hid-logitech-dj.c
@@ -995,7 +995,12 @@ static void logi_hidpp_recv_queue_notif(struct hid_device *hdev,
 		workitem.reports_supported |= STD_KEYBOARD;
 		break;
 	case 0x0d:
-		device_type = "eQUAD Lightspeed 1_1";
+		device_type = "eQUAD Lightspeed 1.1";
+		logi_hidpp_dev_conn_notif_equad(hdev, hidpp_report, &workitem);
+		workitem.reports_supported |= STD_KEYBOARD;
+		break;
+	case 0x0f:
+		device_type = "eQUAD Lightspeed 1.2";
 		logi_hidpp_dev_conn_notif_equad(hdev, hidpp_report, &workitem);
 		workitem.reports_supported |= STD_KEYBOARD;
 		break;
diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
index 217def2d7cb44..ad6630e3cc779 100644
--- a/drivers/i2c/busses/i2c-rcar.c
+++ b/drivers/i2c/busses/i2c-rcar.c
@@ -91,7 +91,6 @@
 
 #define RCAR_BUS_PHASE_START	(MDBS | MIE | ESG)
 #define RCAR_BUS_PHASE_DATA	(MDBS | MIE)
-#define RCAR_BUS_MASK_DATA	(~(ESG | FSB) & 0xFF)
 #define RCAR_BUS_PHASE_STOP	(MDBS | MIE | FSB)
 
 #define RCAR_IRQ_SEND	(MNR | MAL | MST | MAT | MDE)
@@ -120,6 +119,7 @@ enum rcar_i2c_type {
 };
 
 struct rcar_i2c_priv {
+	u32 flags;
 	void __iomem *io;
 	struct i2c_adapter adap;
 	struct i2c_msg *msg;
@@ -130,7 +130,6 @@ struct rcar_i2c_priv {
 
 	int pos;
 	u32 icccr;
-	u32 flags;
 	u8 recovery_icmcr;	/* protected by adapter lock */
 	enum rcar_i2c_type devtype;
 	struct i2c_client *slave;
@@ -621,7 +620,7 @@ static bool rcar_i2c_slave_irq(struct rcar_i2c_priv *priv)
 /*
 * This driver has a lock-free design because there are IP cores (at least
 * R-Car Gen2) which have an inherent race condition in their hardware design.
- * There, we need to clear RCAR_BUS_MASK_DATA bits as soon as possible after
+ * There, we need to switch to RCAR_BUS_PHASE_DATA as soon as possible after
 * the interrupt was generated, otherwise an unwanted repeated message gets
 * generated. It turned out that taking a spinlock at the beginning of the ISR
 * was already causing repeated messages. Thus, this driver was converted to
@@ -630,13 +629,11 @@ static bool rcar_i2c_slave_irq(struct rcar_i2c_priv *priv)
 static irqreturn_t rcar_i2c_irq(int irq, void *ptr)
 {
 	struct rcar_i2c_priv *priv = ptr;
-	u32 msr, val;
+	u32 msr;
 
 	/* Clear START or STOP immediately, except for REPSTART after read */
-	if (likely(!(priv->flags & ID_P_REP_AFTER_RD))) {
-		val = rcar_i2c_read(priv, ICMCR);
-		rcar_i2c_write(priv, ICMCR, val & RCAR_BUS_MASK_DATA);
-	}
+	if (likely(!(priv->flags & ID_P_REP_AFTER_RD)))
+		rcar_i2c_write(priv, ICMCR, RCAR_BUS_PHASE_DATA);
 
 	msr = rcar_i2c_read(priv, ICMSR);
 
diff --git a/drivers/input/keyboard/applespi.c b/drivers/input/keyboard/applespi.c
index d22223154177f..27e87c45edf25 100644
--- a/drivers/input/keyboard/applespi.c
+++ b/drivers/input/keyboard/applespi.c
@@ -48,6 +48,7 @@
 #include <linux/efi.h>
 #include <linux/input.h>
 #include <linux/input/mt.h>
+#include <linux/ktime.h>
 #include <linux/leds.h>
 #include <linux/module.h>
 #include <linux/spinlock.h>
@@ -409,7 +410,7 @@ struct applespi_data {
 	unsigned int			cmd_msg_cntr;
 	/* lock to protect the above parameters and flags below */
 	spinlock_t			cmd_msg_lock;
-	bool				cmd_msg_queued;
+	ktime_t				cmd_msg_queued;
 	enum applespi_evt_type		cmd_evt_type;
 
 	struct led_classdev		backlight_info;
@@ -729,7 +730,7 @@ static void applespi_msg_complete(struct applespi_data *applespi,
 		wake_up_all(&applespi->drain_complete);
 
 	if (is_write_msg) {
		applespi->cmd_msg_queued = 0;
 		applespi_send_cmd_msg(applespi);
 	}
 
@@ -771,8 +772,16 @@ static int applespi_send_cmd_msg(struct applespi_data *applespi)
 		return 0;
 
 	/* check whether send is in progress */
-	if (applespi->cmd_msg_queued)
-		return 0;
+	if (applespi->cmd_msg_queued) {
+		if (ktime_ms_delta(ktime_get(), applespi->cmd_msg_queued) < 1000)
+			return 0;
+
+		dev_warn(&applespi->spi->dev, "Command %d timed out\n",
			 applespi->cmd_evt_type);
+
+		applespi->cmd_msg_queued = 0;
+		applespi->write_active = false;
+	}
 
 	/* set up packet */
 	memset(packet, 0, APPLESPI_PACKET_SIZE);
@@ -869,7 +878,7 @@ static int applespi_send_cmd_msg(struct applespi_data *applespi)
 		return sts;
 	}
 
-	applespi->cmd_msg_queued = true;
+	applespi->cmd_msg_queued = ktime_get_coarse();
 	applespi->write_active = true;
 
 	return 0;
@@ -1921,7 +1930,7 @@ static int __maybe_unused applespi_resume(struct device *dev)
 	applespi->drain = false;
 	applespi->have_cl_led_on = false;
 	applespi->have_bl_level = 0;
-	applespi->cmd_msg_queued = false;
+	applespi->cmd_msg_queued = 0;
 	applespi->read_active = false;
 	applespi->write_active = false;
 
diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
index 83d8ab2aed9f4..01da76dc1caa8 100644
--- a/drivers/iommu/amd/init.c
+++ b/drivers/iommu/amd/init.c
@@ -12,6 +12,7 @@
#include <linux/acpi.h>
#include <linux/list.h>
#include <linux/bitmap.h>
+#include <linux/delay.h>
#include <linux/slab.h>
#include <linux/syscore_ops.h>
#include <linux/interrupt.h>
@@ -254,6 +255,8 @@ static enum iommu_init_state init_state = IOMMU_START_STATE;
static int amd_iommu_enable_interrupts(void);
static int __init iommu_go_to_state(enum iommu_init_state state);
static void init_device_table_dma(void);
+static int iommu_pc_get_set_reg(struct amd_iommu *iommu, u8 bank, u8 cntr,
+ u8 fxn, u64 *value, bool is_write);

static bool amd_iommu_pre_enabled = true;

@@ -1712,13 +1715,11 @@ static int __init init_iommu_all(struct acpi_table_header *table)
return 0;
}

-static int iommu_pc_get_set_reg(struct amd_iommu *iommu, u8 bank, u8 cntr,
- u8 fxn, u64 *value, bool is_write);
-
-static void init_iommu_perf_ctr(struct amd_iommu *iommu)
+static void __init init_iommu_perf_ctr(struct amd_iommu *iommu)
{
+ int retry;
struct pci_dev *pdev = iommu->dev;
- u64 val = 0xabcd, val2 = 0, save_reg = 0;
+ u64 val = 0xabcd, val2 = 0, save_reg, save_src;

if (!iommu_feature(iommu, FEATURE_PC))
return;
@@ -1726,17 +1727,39 @@ static void init_iommu_perf_ctr(struct amd_iommu *iommu)
amd_iommu_pc_present = true;

/* save the value to restore, if writable */
- if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, false))
+ if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, false) ||
+ iommu_pc_get_set_reg(iommu, 0, 0, 8, &save_src, false))
goto pc_false;

- /* Check if the performance counters can be written to */
- if ((iommu_pc_get_set_reg(iommu, 0, 0, 0, &val, true)) ||
- (iommu_pc_get_set_reg(iommu, 0, 0, 0, &val2, false)) ||
- (val != val2))
+ /*
+ * Disable power gating by programing the performance counter
+ * source to 20 (i.e. counts the reads and writes from/to IOMMU
+ * Reserved Register [MMIO Offset 1FF8h] that are ignored.),
+ * which never get incremented during this init phase.
+ * (Note: The event is also deprecated.)
+ */
+ val = 20;
+ if (iommu_pc_get_set_reg(iommu, 0, 0, 8, &val, true))
goto pc_false;

+ /* Check if the performance counters can be written to */
+ val = 0xabcd;
+ for (retry = 5; retry; retry--) {
+ if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &val, true) ||
+ iommu_pc_get_set_reg(iommu, 0, 0, 0, &val2, false) ||
+ val2)
+ break;
+
+ /* Wait about 20 msec for power gating to disable and retry. */
+ msleep(20);
+ }
+
/* restore */
- if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, true))
+ if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, true) ||
+ iommu_pc_get_set_reg(iommu, 0, 0, 8, &save_src, true))
+ goto pc_false;
+
+ if (val != val2)
goto pc_false;

pci_info(pdev, "IOMMU performance counters supported\n")&semi;
diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
index 18a9f05df4079..b3bcd6dec93e7 100644
--- a/drivers/iommu/intel/svm.c
+++ b/drivers/iommu/intel/svm.c
@@ -1079,8 +1079,17 @@ prq_advance:
* Clear the page request overflow bit and wake up all threads that
* are waiting for the completion of this handling.
*/
- if (readl(iommu->reg + DMAR_PRS_REG) & DMA_PRS_PRO)
- writel(DMA_PRS_PRO, iommu->reg + DMAR_PRS_REG);
+ if (readl(iommu->reg + DMAR_PRS_REG) & DMA_PRS_PRO) {
+ pr_info_ratelimited("IOMMU: %s: PRQ overflow detected\n",
+ iommu->name);
+ head = dmar_readq(iommu->reg + DMAR_PQH_REG) & PRQ_RING_MASK;
+ tail = dmar_readq(iommu->reg + DMAR_PQT_REG) & PRQ_RING_MASK;
+ if (head == tail) {
+ writel(DMA_PRS_PRO, iommu->reg + DMAR_PRS_REG);
+ pr_info_ratelimited("IOMMU: %s: PRQ overflow cleared",
+ iommu->name);
+ }
+ }

if (!completion_done(&iommu->prq_complete))
complete(&iommu->prq_complete);
diff --git a/drivers/media/platform/rockchip/rkisp1/rkisp1-params.c b/drivers/media/platform/rockchip/rkisp1/rkisp1-params.c
index aa5f45749543b..a60c302ef2676 100644
--- a/drivers/media/platform/rockchip/rkisp1/rkisp1-params.c
+++ b/drivers/media/platform/rockchip/rkisp1/rkisp1-params.c
@@ -1288,7 +1288,6 @@ static void rkisp1_params_config_parameter(struct rkisp1_params *params)
memset(hst.hist_weight, 0x01, sizeof(hst.hist_weight));
rkisp1_hst_config(params, &hst);
rkisp1_param_set_bits(params, RKISP1_CIF_ISP_HIST_PROP,
- ~RKISP1_CIF_ISP_HIST_PROP_MODE_MASK |
rkisp1_hst_params_default_config.mode);

/* set the range */
diff --git a/drivers/media/platform/vsp1/vsp1_drm.c b/drivers/media/platform/vsp1/vsp1_drm.c
index 86d5e3f4b1ffc..06f74d410973e 100644
--- a/drivers/media/platform/vsp1/vsp1_drm.c
+++ b/drivers/media/platform/vsp1/vsp1_drm.c
@@ -245,7 +245,7 @@ static int vsp1_du_pipeline_setup_brx(struct vsp1_device *vsp1,
brx = &vsp1->bru->entity;
else if (pipe->brx && !drm_pipe->force_brx_release)
brx = pipe->brx;
- else if (!vsp1->bru->entity.pipe)
+ else if (vsp1_feature(vsp1, VSP1_HAS_BRU) && !vsp1->bru->entity.pipe)
brx = &vsp1->bru->entity;
else
brx = &vsp1->brs->entity;
@@ -462,9 +462,9 @@ static int vsp1_du_pipeline_setup_inputs(struct vsp1_device *vsp1,
* make sure it is present in the pipeline's list of entities if it
* wasn't already.
*/
- if (!use_uif) {
+ if (drm_pipe->uif && !use_uif) {
drm_pipe->uif->pipe = NULL;
- } else if (!drm_pipe->uif->pipe) {
+ } else if (drm_pipe->uif && !drm_pipe->uif->pipe) {
drm_pipe->uif->pipe = pipe;
list_add_tail(&drm_pipe->uif->list_pipe, &pipe->entities);
}
diff --git a/drivers/media/rc/Makefile b/drivers/media/rc/Makefile
index 5bb2932ab1195..ff6a8fc4c38e5 100644
--- a/drivers/media/rc/Makefile
+++ b/drivers/media/rc/Makefile
@@ -5,6 +5,7 @@ obj-y += keymaps/
obj-$(CONFIG_RC_CORE) += rc-core.o
rc-core-y := rc-main.o rc-ir-raw.o
rc-core-$(CONFIG_LIRC) += lirc_dev.o
+rc-core-$(CONFIG_MEDIA_CEC_RC) += keymaps/rc-cec.o
rc-core-$(CONFIG_BPF_LIRC_MODE2) += bpf-lirc.o
obj-$(CONFIG_IR_NEC_DECODER) += ir-nec-decoder.o
obj-$(CONFIG_IR_RC5_DECODER) += ir-rc5-decoder.o
diff --git a/drivers/media/rc/keymaps/Makefile b/drivers/media/rc/keymaps/Makefile
index b252a1d2ebd66..cc6662e1903f5 100644
--- a/drivers/media/rc/keymaps/Makefile
+++ b/drivers/media/rc/keymaps/Makefile
@@ -21,7 +21,6 @@ obj-$(CONFIG_RC_MAP) += rc-adstech-dvb-t-pci.o \
rc-behold.o \
rc-behold-columbus.o \
rc-budget-ci-old.o \
- rc-cec.o \
rc-cinergy-1400.o \
rc-cinergy.o \
rc-d680-dmb.o \
diff --git a/drivers/media/rc/keymaps/rc-cec.c b/drivers/media/rc/keymaps/rc-cec.c
index 3e3bd11092b45..068e22aeac8c3 100644
--- a/drivers/media/rc/keymaps/rc-cec.c
+++ b/drivers/media/rc/keymaps/rc-cec.c
@@ -1,5 +1,15 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/* Keytable for the CEC remote control
+ *
+ * This keymap is unusual in that it can't be built as a module,
+ * instead it is registered directly in rc-main.c if CONFIG_MEDIA_CEC_RC
+ * is set. This is because it can be called from drm_dp_cec_set_edid() via
+ * cec_register_adapter() in an asynchronous context, and it is not
+ * allowed to use request_module() to load rc-cec.ko in that case.
+ *
+ * Since this keymap is only used if CONFIG_MEDIA_CEC_RC is set, we
+ * just compile this keymap into the rc-core module and never as a
+ * separate module.
*
* Copyright (c) 2015 by Kamil Debski
*/
@@ -152,7 +162,7 @@ static struct rc_map_table cec[] = {
/* 0x77-0xff: Reserved */
};

-static struct rc_map_list cec_map = {
+struct rc_map_list cec_map = {
.map = {
.scan = cec,
.size = ARRAY_SIZE(cec),
@@ -160,19 +170,3 @@ static struct rc_map_list cec_map = {
.name = RC_MAP_CEC,
}
};
-
-static int __init init_rc_map_cec(void)
-{
-	return rc_map_register(&cec_map);
-}
-
-static void __exit exit_rc_map_cec(void)
-{
-	rc_map_unregister(&cec_map);
-}
-
-module_init(init_rc_map_cec);
-module_exit(exit_rc_map_cec);
-
-MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Kamil Debski");
diff --git a/drivers/media/rc/rc-main.c b/drivers/media/rc/rc-main.c
index 1fd62c1dac768..8e88dc8ea6c5e 100644
--- a/drivers/media/rc/rc-main.c
+++ b/drivers/media/rc/rc-main.c
@@ -2069,6 +2069,9 @@ static int __init rc_core_init(void)

led_trigger_register_simple("rc-feedback", &led_feedback);
rc_map_register(&empty_map);
+#ifdef CONFIG_MEDIA_CEC_RC
+ rc_map_register(&cec_map);
+#endif

return 0;
}
@@ -2078,6 +2081,9 @@ static void __exit rc_core_exit(void)
lirc_dev_exit();
class_unregister(&rc_class);
led_trigger_unregister_simple(led_feedback);
+#ifdef CONFIG_MEDIA_CEC_RC
+ rc_map_unregister(&cec_map);
+#endif
rc_map_unregister(&empty_map);
}

diff --git a/drivers/media/usb/usbtv/usbtv-audio.c b/drivers/media/usb/usbtv/usbtv-audio.c
index b57e94fb19770..333bd305a4f9f 100644
--- a/drivers/media/usb/usbtv/usbtv-audio.c
+++ b/drivers/media/usb/usbtv/usbtv-audio.c
@@ -371,7 +371,7 @@ void usbtv_audio_free(struct usbtv *usbtv)
cancel_work_sync(&usbtv->snd_trigger);

if (usbtv->snd && usbtv->udev) {
- snd_card_free(usbtv->snd);
+ snd_card_free_when_closed(usbtv->snd);
usbtv->snd = NULL;
}
}
diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
index f12e909034ac0..beda610e6b30d 100644
--- a/drivers/misc/fastrpc.c
+++ b/drivers/misc/fastrpc.c
@@ -950,6 +950,11 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel,
if (!fl->cctx->rpdev)
return -EPIPE;

+ if (handle == FASTRPC_INIT_HANDLE && !kernel) {
+ dev_warn_ratelimited(fl->sctx->dev, "user app trying to send a kernel RPC message (%d)\n", handle);
+ return -EPERM;
+ }
+
ctx = fastrpc_context_alloc(fl, kernel, sc, args);
if (IS_ERR(ctx))
return PTR_ERR(ctx);
diff --git a/drivers/misc/pvpanic.c b/drivers/misc/pvpanic.c
index 41cab297d66e7..2356d621967ef 100644
--- a/drivers/misc/pvpanic.c
+++ b/drivers/misc/pvpanic.c
@@ -92,6 +92,7 @@ static const struct of_device_id pvpanic_mmio_match[] = {
{ .compatible = "qemu,pvpanic-mmio", },
{}
};
+MODULE_DEVICE_TABLE(of, pvpanic_mmio_match);

static const struct acpi_device_id pvpanic_device_ids[] = {
{ "QEMU0001", 0 },
diff --git a/drivers/mmc/core/bus.c b/drivers/mmc/core/bus.c
index c2e70b757dd12..4383c262b3f5a 100644
--- a/drivers/mmc/core/bus.c
+++ b/drivers/mmc/core/bus.c
@@ -399,11 +399,6 @@ void mmc_remove_card(struct mmc_card *card)
mmc_remove_card_debugfs(card);
#endif

- if (host->cqe_enabled) {
- host->cqe_ops->cqe_disable(host);
- host->cqe_enabled = false;
- }
-
if (mmc_card_present(card)) {
if (mmc_host_is_spi(card->host)) {
pr_info("%s: SPI card removed\n",
@@ -416,6 +411,10 @@ void mmc_remove_card(struct mmc_card *card)
of_node_put(card->dev.of_node);
}

+ if (host->cqe_enabled) {
+ host->cqe_ops->cqe_disable(host);
+ host->cqe_enabled = false;
+ }
+
put_device(&card->dev);
}
-
diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
index ff3063ce2acda..9ce34e8800335 100644
--- a/drivers/mmc/core/mmc.c
+++ b/drivers/mmc/core/mmc.c
@@ -423,10 +423,6 @@ static int mmc_decode_ext_csd(struct mmc_card *card, u8 *ext_csd)

/* EXT_CSD value is in units of 10ms, but we store in ms */
card->ext_csd.part_time = 10 * ext_csd[EXT_CSD_PART_SWITCH_TIME];
- /* Some eMMC set the value too low so set a minimum */
- if (card->ext_csd.part_time &&
- card->ext_csd.part_time < MMC_MIN_PART_SWITCH_TIME)
- card->ext_csd.part_time = MMC_MIN_PART_SWITCH_TIME;

/* Sleep / awake timeout in 100ns units */
if (sa_shift > 0 && sa_shift <= 0x17)
@@ -616,6 +612,17 @@ static int mmc_decode_ext_csd(struct mmc_card *card, u8 *ext_csd)
card->ext_csd.data_sector_size = 512;
}

+ /*
+ * GENERIC_CMD6_TIME is to be used "unless a specific timeout is defined
+ * when accessing a specific field", so use it here if there is no
+ * PARTITION_SWITCH_TIME.
+ */
+ if (!card->ext_csd.part_time)
+ card->ext_csd.part_time = card->ext_csd.generic_cmd6_time;
+ /* Some eMMC set the value too low so set a minimum */
+ if (card->ext_csd.part_time < MMC_MIN_PART_SWITCH_TIME)
+ card->ext_csd.part_time = MMC_MIN_PART_SWITCH_TIME;
+
/* eMMC v5 or later */
if (card->ext_csd.rev >= 7) {
memcpy(card->ext_csd.fwrev, &ext_csd[EXT_CSD_FIRMWARE_VERSION],
diff --git a/drivers/mmc/host/mmci.c b/drivers/mmc/host/mmci.c
index b5a41a7ce1658..9bde0def114b5 100644
--- a/drivers/mmc/host/mmci.c
+++ b/drivers/mmc/host/mmci.c
@@ -1241,7 +1241,11 @@ mmci_start_command(struct mmci_host *host, struct mmc_command *cmd, u32 c)
if (!cmd->busy_timeout)
cmd->busy_timeout = 10 * MSEC_PER_SEC;

- clks = (unsigned long long)cmd->busy_timeout * host->cclk;
+ if (cmd->busy_timeout > host->mmc->max_busy_timeout)
+ clks = (unsigned long long)host->mmc->max_busy_timeout * host->cclk;
+ else
+ clks = (unsigned long long)cmd->busy_timeout * host->cclk;
+
do_div(clks, MSEC_PER_SEC);
writel_relaxed(clks, host->base + MMCIDATATIMER);
}
@@ -2091,6 +2095,10 @@ static int mmci_probe(struct amba_device *dev,
mmc->caps |= MMC_CAP_WAIT_WHILE_BUSY;
}

+ /* Variants with mandatory busy timeout in HW needs R1B responses. */
+ if (variant->busy_timeout)
+ mmc->caps |= MMC_CAP_NEED_RSP_BUSY;
+
/* Prepare a CMD12 - needed to clear the DPSM on some variants. */
host->stop_abort.opcode = MMC_STOP_TRANSMISSION;
host->stop_abort.arg = 0;
diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
index de09c63475240..898ed1b023df6 100644
--- a/drivers/mmc/host/mtk-sd.c
+++ b/drivers/mmc/host/mtk-sd.c
@@ -1127,13 +1127,13 @@ static void msdc_track_cmd_data(struct msdc_host *host,
static void msdc_request_done(struct msdc_host *host, struct mmc_request *mrq)
{
unsigned long flags;
- bool ret;

- ret = cancel_delayed_work(&host->req_timeout);
- if (!ret) {
- /* delay work already running */
- return;
- }
+ /*
+ * No need check the return value of cancel_delayed_work, as only ONE
+ * path will go here!
+ */
+ cancel_delayed_work(&host->req_timeout);
+
spin_lock_irqsave(&host->lock, flags);
host->mrq = NULL;
spin_unlock_irqrestore(&host->lock, flags);
@@ -1155,7 +1155,7 @@ static bool msdc_cmd_done(struct msdc_host *host, int events,
bool done = false;
bool sbc_error;
unsigned long flags;
- u32 *rsp = cmd->resp;
+ u32 *rsp;

if (mrq->sbc && cmd == mrq->cmd &&
(events & (MSDC_INT_ACMDRDY | MSDC_INT_ACMDCRCERR
@@ -1176,6 +1176,7 @@ static bool msdc_cmd_done(struct msdc_host *host, int events,

if (done)
return true;
+ rsp = cmd->resp;

sdr_clr_bits(host->base + MSDC_INTEN, cmd_ints_mask);

@@ -1363,7 +1364,7 @@ static void msdc_data_xfer_next(struct msdc_host *host,
static bool msdc_data_xfer_done(struct msdc_host *host, u32 events,
struct mmc_request *mrq, struct mmc_data *data)
{
- struct mmc_command *stop = data->stop;
+ struct mmc_command *stop;
unsigned long flags;
bool done;
unsigned int check_data = events &
@@ -1379,6 +1380,7 @@ static bool msdc_data_xfer_done(struct msdc_host *host, u32 events,

if (done)
return true;
+ stop = data->stop;

if (check_data || (stop && stop->error)) {
dev_dbg(host->dev, "DMA status: 0x%8X\n",
diff --git a/drivers/mmc/host/mxs-mmc.c b/drivers/mmc/host/mxs-mmc.c
index 56bbc6cd9c848..947581de78601 100644
--- a/drivers/mmc/host/mxs-mmc.c
+++ b/drivers/mmc/host/mxs-mmc.c
@@ -628,7 +628,7 @@ static int mxs_mmc_probe(struct platform_device *pdev)

ret = mmc_of_parse(mmc);
if (ret)
- goto out_clk_disable;
+ goto out_free_dma;

mmc->ocr_avail = MMC_VDD_32_33 | MMC_VDD_33_34;

diff --git a/drivers/mmc/host/sdhci-iproc.c b/drivers/mmc/host/sdhci-iproc.c
index c9434b461aabc..ddeaf8e1f72f9 100644
--- a/drivers/mmc/host/sdhci-iproc.c
+++ b/drivers/mmc/host/sdhci-iproc.c
@@ -296,9 +296,27 @@ static const struct of_device_id sdhci_iproc_of_match[] = {
MODULE_DEVICE_TABLE(of, sdhci_iproc_of_match);

#ifdef CONFIG_ACPI
+/*
+ * This is a duplicate of bcm2835_(pltfrm_)data without caps quirks
+ * which are provided by the ACPI table.
+ */
+static const struct sdhci_pltfm_data sdhci_bcm_arasan_data = {
+ .quirks = SDHCI_QUIRK_BROKEN_CARD_DETECTION |
+ SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK |
+ SDHCI_QUIRK_NO_HISPD_BIT,
+ .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
+ .ops = &sdhci_iproc_32only_ops,
+};
+
+static const struct sdhci_iproc_data bcm_arasan_data = {
+ .pdata = &sdhci_bcm_arasan_data,
+};
+
static const struct acpi_device_id sdhci_iproc_acpi_ids[] = {
{ .id = "BRCM5871", .driver_data = (kernel_ulong_t)&iproc_cygnus_data },
{ .id = "BRCM5872", .driver_data = (kernel_ulong_t)&iproc_data },
+ { .id = "BCM2847", .driver_data = (kernel_ulong_t)&bcm_arasan_data },
+ { .id = "BRCME88C", .driver_data = (kernel_ulong_t)&bcm2711_data },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(acpi, sdhci_iproc_acpi_ids);
diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
index 63339d29be905..6d9e90887b29a 100644
--- a/drivers/net/Kconfig
+++ b/drivers/net/Kconfig
@@ -92,7 +92,7 @@ config WIREGUARD
select CRYPTO_POLY1305_ARM if ARM
select CRYPTO_CURVE25519_NEON if ARM && KERNEL_MODE_NEON
select CRYPTO_CHACHA_MIPS if CPU_MIPS32_R2
- select CRYPTO_POLY1305_MIPS if CPU_MIPS32 || (CPU_MIPS64 && 64BIT)
+ select CRYPTO_POLY1305_MIPS if MIPS
help
WireGuard is a secure, fast, and easy to use replacement for IPSec
that uses modern cryptography and clever networking tricks. It's
diff --git a/drivers/net/can/flexcan.c b/drivers/net/can/flexcan.c
index 7ab20a6b0d1db..2893297555eba 100644
--- a/drivers/net/can/flexcan.c
+++ b/drivers/net/can/flexcan.c
@@ -701,7 +701,7 @@ static int flexcan_chip_freeze(struct flexcan_priv *priv)
u32 reg;

reg = priv->read(&regs->mcr);
- reg |= FLEXCAN_MCR_HALT;
+ reg |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_HALT;
priv->write(reg, &regs->mcr);

while (timeout-- && !(priv->read(&regs->mcr) & FLEXCAN_MCR_FRZ_ACK))
@@ -1479,10 +1479,13 @@ static int flexcan_chip_start(struct net_device *dev)

flexcan_set_bittiming(dev);

+ /* set freeze, halt */
+ err = flexcan_chip_freeze(priv);
+ if (err)
+ goto out_chip_disable;
+
/* MCR
*
- * enable freeze
- * halt now
* only supervisor access
* enable warning int
* enable individual RX masking
@@ -1491,9 +1494,8 @@ static int flexcan_chip_start(struct net_device *dev)
*/
reg_mcr = priv->read(&regs->mcr);
reg_mcr &= ~FLEXCAN_MCR_MAXMB(0xff);
- reg_mcr |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_HALT | FLEXCAN_MCR_SUPV |
- FLEXCAN_MCR_WRN_EN | FLEXCAN_MCR_IRMQ | FLEXCAN_MCR_IDAM_C |
- FLEXCAN_MCR_MAXMB(priv->tx_mb_idx);
+ reg_mcr |= FLEXCAN_MCR_SUPV | FLEXCAN_MCR_WRN_EN | FLEXCAN_MCR_IRMQ |
+ FLEXCAN_MCR_IDAM_C | FLEXCAN_MCR_MAXMB(priv->tx_mb_idx);

/* MCR
*
@@ -1864,10 +1866,14 @@ static int register_flexcandev(struct net_device *dev)
if (err)
goto out_chip_disable;

- /* set freeze, halt and activate FIFO, restrict register access */
+ /* set freeze, halt */
+ err = flexcan_chip_freeze(priv);
+ if (err)
+ goto out_chip_disable;
+
+ /* activate FIFO, restrict register access */
reg = priv->read(&regs->mcr);
- reg |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_HALT |
- FLEXCAN_MCR_FEN | FLEXCAN_MCR_SUPV;
+ reg |= FLEXCAN_MCR_FEN | FLEXCAN_MCR_SUPV;
priv->write(reg, &regs->mcr);

/* Currently we only support newer versions of this core
diff --git a/drivers/net/can/m_can/tcan4x5x.c b/drivers/net/can/m_can/tcan4x5x.c
index 970f0e9d19bfd..4920de09ffb79 100644
--- a/drivers/net/can/m_can/tcan4x5x.c
+++ b/drivers/net/can/m_can/tcan4x5x.c
@@ -326,14 +326,14 @@ static int tcan4x5x_init(struct m_can_classdev *cdev)
if (ret)
return ret;

+ /* Zero out the MCAN buffers */
+ m_can_init_ram(cdev);
+
ret = regmap_update_bits(tcan4x5x->regmap, TCAN4X5X_CONFIG,
TCAN4X5X_MODE_SEL_MASK, TCAN4X5X_MODE_NORMAL);
if (ret)
return ret;

- /* Zero out the MCAN buffers */
- m_can_init_ram(cdev);
-
return ret;
}

diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
index 4ca0296509936..1a855816cbc9d 100644
--- a/drivers/net/dsa/sja1105/sja1105_main.c
+++ b/drivers/net/dsa/sja1105/sja1105_main.c
@@ -1834,7 +1834,7 @@ out_unlock_ptp:
speed = SPEED_1000;
else if (bmcr & BMCR_SPEED100)
speed = SPEED_100;
- else if (bmcr & BMCR_SPEED10)
+ else
speed = SPEED_10;

sja1105_sgmii_pcs_force_speed(priv, speed);
diff --git a/drivers/net/ethernet/atheros/alx/main.c b/drivers/net/ethernet/atheros/alx/main.c
index 9b7f1af5f5747..9e02f88645931 100644
--- a/drivers/net/ethernet/atheros/alx/main.c
+++ b/drivers/net/ethernet/atheros/alx/main.c
@@ -1894,13 +1894,16 @@ static int alx_resume(struct device *dev)

if (!netif_running(alx->dev))
return 0;
- netif_device_attach(alx->dev);

rtnl_lock();
err = __alx_open(alx, true);
rtnl_unlock();
+ if (err)
+ return err;

- return err;
+ netif_device_attach(alx->dev);
+
+ return 0;
}

static SIMPLE_DEV_PM_OPS(alx_pm_ops, alx_suspend, alx_resume);
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 1c96b7ba24f28..80819d8fddb4b 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -8430,10 +8430,18 @@ static void bnxt_setup_inta(struct bnxt *bp)
bp->irq_tbl[0].handler = bnxt_inta;
}

+static int bnxt_init_int_mode(struct bnxt *bp);
+
static int bnxt_setup_int_mode(struct bnxt *bp)
{
int rc;

+ if (!bp->irq_tbl) {
+ rc = bnxt_init_int_mode(bp);
+ if (rc || !bp->irq_tbl)
+ return rc ?: -ENODEV;
+ }
+
if (bp->flags & BNXT_FLAG_USING_MSIX)
bnxt_setup_msix(bp);
else
@@ -8618,7 +8626,7 @@ static int bnxt_init_inta(struct bnxt *bp)

static int bnxt_init_int_mode(struct bnxt *bp)
{
- int rc = 0;
+ int rc = -ENODEV;

if (bp->flags & BNXT_FLAG_MSIX_CAP)
rc = bnxt_init_msix(bp);
@@ -9339,7 +9347,8 @@ static int bnxt_hwrm_if_change(struct bnxt *bp, bool up)
{
struct hwrm_func_drv_if_change_output *resp = bp->hwrm_cmd_resp_addr;
struct hwrm_func_drv_if_change_input req = {0};
- bool resc_reinit = false, fw_reset = false;
+ bool fw_reset = !bp->irq_tbl;
+ bool resc_reinit = false;
u32 flags = 0;
int rc;

@@ -9367,6 +9376,7 @@ static int bnxt_hwrm_if_change(struct bnxt *bp, bool up)

if (test_bit(BNXT_STATE_IN_FW_RESET, &bp->state) && !fw_reset) {
netdev_err(bp->dev, "RESET_DONE not set during FW reset.\n");
+ set_bit(BNXT_STATE_ABORT_ERR, &bp->state);
return -ENODEV;
}
if (resc_reinit || fw_reset) {
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 814a5b10141d1..07cdb38e7d118 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -3950,6 +3950,13 @@ static int macb_init(struct platform_device *pdev)
return 0;
}

+static const struct macb_usrio_config macb_default_usrio = {
+ .mii = MACB_BIT(MII),
+ .rmii = MACB_BIT(RMII),
+ .rgmii = GEM_BIT(RGMII),
+ .refclk = MACB_BIT(CLKEN),
+};
+
#if defined(CONFIG_OF)
/* 1518 rounded up */
#define AT91ETHER_MAX_RBUFF_SZ 0x600
@@ -4435,13 +4442,6 @@ static int fu540_c000_init(struct platform_device *pdev)
return macb_init(pdev);
}

-static const struct macb_usrio_config macb_default_usrio = {
- .mii = MACB_BIT(MII),
- .rmii = MACB_BIT(RMII),
- .rgmii = GEM_BIT(RGMII),
- .refclk = MACB_BIT(CLKEN),
-};
-
static const struct macb_usrio_config sama7g5_usrio = {
.mii = 0,
.rmii = 1,
@@ -4590,6 +4590,7 @@ static const struct macb_config default_gem_config = {
.dma_burst_length = 16,
.clk_init = macb_clk_init,
.init = macb_init,
+ .usrio = &macb_default_usrio,
.jumbo_max_len = 10240,
};

diff --git a/drivers/net/ethernet/davicom/dm9000.c b/drivers/net/ethernet/davicom/dm9000.c
index 3fdc70dab5c14..a95e95ce94386 100644
--- a/drivers/net/ethernet/davicom/dm9000.c
+++ b/drivers/net/ethernet/davicom/dm9000.c
@@ -133,6 +133,8 @@ struct board_info {
u32 wake_state;

int ip_summed;
+
+ struct regulator *power_supply;
};

/* debug code */
@@ -1449,7 +1451,7 @@ dm9000_probe(struct platform_device *pdev)
if (ret) {
dev_err(dev, "failed to request reset gpio %d: %d\n",
reset_gpios, ret);
- return -ENODEV;
+ goto out_regulator_disable;
}

/* According to manual PWRST# Low Period Min 1ms */
@@ -1461,8 +1463,10 @@ dm9000_probe(struct platform_device *pdev)

if (!pdata) {
pdata = dm9000_parse_dt(&pdev->dev);
- if (IS_ERR(pdata))
- return PTR_ERR(pdata);
+ if (IS_ERR(pdata)) {
+ ret = PTR_ERR(pdata);
+ goto out_regulator_disable;
+ }
}

/* Init network device */
@@ -1479,6 +1483,8 @@ dm9000_probe(struct platform_device *pdev)

db->dev = &pdev->dev;
db->ndev = ndev;
+ if (!IS_ERR(power))
+ db->power_supply = power;

spin_lock_init(&db->lock);
mutex_init(&db->addr_lock);
@@ -1703,6 +1709,10 @@ out:
dm9000_release_board(pdev, db);
free_netdev(ndev);

+out_regulator_disable:
+ if (!IS_ERR(power))
+ regulator_disable(power);
+
return ret;
}

@@ -1760,10 +1770,13 @@ static int
dm9000_drv_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
+ struct board_info *dm = to_dm9000_board(ndev);

unregister_netdev(ndev);
- dm9000_release_board(pdev, netdev_priv(ndev));
+ dm9000_release_board(pdev, dm);
free_netdev(ndev); /* free device structure */
+ if (dm->power_supply)
+ regulator_disable(dm->power_supply);

dev_dbg(&pdev->dev, "released and freed device\n");
return 0;
diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
index c78d12229730b..09471329f3a36 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -281,6 +281,8 @@ static int enetc_poll(struct napi_struct *napi, int budget)
int work_done;
int i;

+ enetc_lock_mdio();
+
for (i = 0; i < v->count_tx_rings; i++)
if (!enetc_clean_tx_ring(&v->tx_ring[i], budget))
complete = false;
@@ -291,8 +293,10 @@ static int enetc_poll(struct napi_struct *napi, int budget)
if (work_done)
v->rx_napi_work = true;

- if (!complete)
+ if (!complete) {
+ enetc_unlock_mdio();
return budget;
+ }

napi_complete_done(napi, work_done);

@@ -301,8 +305,6 @@ static int enetc_poll(struct napi_struct *napi, int budget)

v->rx_napi_work = false;

- enetc_lock_mdio();
-
/* enable interrupts */
enetc_wr_reg_hot(v->rbier, ENETC_RBIER_RXTIE);

@@ -327,8 +329,8 @@ static void enetc_get_tx_tstamp(struct enetc_hw *hw, union enetc_tx_bd *txbd,
{
u32 lo, hi, tstamp_lo;

- lo = enetc_rd(hw, ENETC_SICTR0);
- hi = enetc_rd(hw, ENETC_SICTR1);
+ lo = enetc_rd_hot(hw, ENETC_SICTR0);
+ hi = enetc_rd_hot(hw, ENETC_SICTR1);
tstamp_lo = le32_to_cpu(txbd->wb.tstamp);
if (lo <= tstamp_lo)
hi -= 1;
@@ -342,6 +344,12 @@ static void enetc_tstamp_tx(struct sk_buff *skb, u64 tstamp)
if (skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS) {
memset(&shhwtstamps, 0, sizeof(shhwtstamps));
shhwtstamps.hwtstamp = ns_to_ktime(tstamp);
+ /* Ensure skb_mstamp_ns, which might have been populated with
+ * the txtime, is not mistaken for a software timestamp,
+ * because this will prevent the dispatch of our hardware
+ * timestamp to the socket.
+ */
+ skb->tstamp = ktime_set(0, 0);
skb_tstamp_tx(skb, &shhwtstamps);
}
}
@@ -358,9 +366,7 @@ static bool enetc_clean_tx_ring(struct enetc_bdr *tx_ring, int napi_budget)
i = tx_ring->next_to_clean;
tx_swbd = &tx_ring->tx_swbd[i];

- enetc_lock_mdio();
bds_to_clean = enetc_bd_ready_count(tx_ring, i);
- enetc_unlock_mdio();

do_tstamp = false;

@@ -403,8 +409,6 @@ static bool enetc_clean_tx_ring(struct enetc_bdr *tx_ring, int napi_budget)
tx_swbd = tx_ring->tx_swbd;
}

- enetc_lock_mdio();
-
/* BD iteration loop end */
if (is_eof) {
tx_frm_cnt++;
@@ -415,8 +419,6 @@ static bool enetc_clean_tx_ring(struct enetc_bdr *tx_ring, int napi_budget)

if (unlikely(!bds_to_clean))
bds_to_clean = enetc_bd_ready_count(tx_ring, i);
-
- enetc_unlock_mdio();
}

tx_ring->next_to_clean = i;
@@ -527,9 +529,8 @@ static void enetc_get_rx_tstamp(struct net_device *ndev,
static void enetc_get_offloads(struct enetc_bdr *rx_ring,
union enetc_rx_bd *rxbd, struct sk_buff *skb)
{
-#ifdef CONFIG_FSL_ENETC_PTP_CLOCK
struct enetc_ndev_priv *priv = netdev_priv(rx_ring->ndev);
-#endif
+
/* TODO: hashing */
if (rx_ring->ndev->features & NETIF_F_RXCSUM) {
u16 inet_csum = le16_to_cpu(rxbd->r.inet_csum);
@@ -538,12 +539,31 @@ static void enetc_get_offloads(struct enetc_bdr *rx_ring,
skb->ip_summed = CHECKSUM_COMPLETE;
}

- /* copy VLAN to skb, if one is extracted, for now we assume it's a
- * standard TPID, but HW also supports custom values
- */
- if (le16_to_cpu(rxbd->r.flags) & ENETC_RXBD_FLAG_VLAN)
- __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
- le16_to_cpu(rxbd->r.vlan_opt));
+ if (le16_to_cpu(rxbd->r.flags) & ENETC_RXBD_FLAG_VLAN) {
+ __be16 tpid = 0;
+
+ switch (le16_to_cpu(rxbd->r.flags) & ENETC_RXBD_FLAG_TPID) {
+ case 0:
+ tpid = htons(ETH_P_8021Q);
+ break;
+ case 1:
+ tpid = htons(ETH_P_8021AD);
+ break;
+ case 2:
+ tpid = htons(enetc_port_rd(&priv->si->hw,
+ ENETC_PCVLANR1));
+ break;
+ case 3:
+ tpid = htons(enetc_port_rd(&priv->si->hw,
+ ENETC_PCVLANR2));
+ break;
+ default:
+ break;
+ }
+
+ __vlan_hwaccel_put_tag(skb, tpid, le16_to_cpu(rxbd->r.vlan_opt));
+ }
+
#ifdef CONFIG_FSL_ENETC_PTP_CLOCK
if (priv->active_offloads & ENETC_F_RX_TSTAMP)
enetc_get_rx_tstamp(rx_ring->ndev, rxbd, skb);
@@ -660,8 +680,6 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
u32 bd_status;
u16 size;

- enetc_lock_mdio();
-
if (cleaned_cnt >= ENETC_RXBD_BUNDLE) {
int count = enetc_refill_rx_ring(rx_ring, cleaned_cnt);

@@ -672,19 +690,15 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,

rxbd = enetc_rxbd(rx_ring, i);
bd_status = le32_to_cpu(rxbd->r.lstatus);
- if (!bd_status) {
- enetc_unlock_mdio();
+ if (!bd_status)
break;
- }

enetc_wr_reg_hot(rx_ring->idr, BIT(rx_ring->index));
dma_rmb(); /* for reading other rxbd fields */
size = le16_to_cpu(rxbd->r.buf_len);
skb = enetc_map_rx_buff_to_skb(rx_ring, i, size);
- if (!skb) {
- enetc_unlock_mdio();
+ if (!skb)
break;
- }

enetc_get_offloads(rx_ring, rxbd, skb);

@@ -696,7 +710,6 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,

if (unlikely(bd_status &
ENETC_RXBD_LSTATUS(ENETC_RXBD_ERR_MASK))) {
- enetc_unlock_mdio();
dev_kfree_skb(skb);
while (!(bd_status & ENETC_RXBD_LSTATUS_F)) {
dma_rmb();
@@ -736,8 +749,6 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,

enetc_process_skb(rx_ring, skb);

- enetc_unlock_mdio();
-
napi_gro_receive(napi, skb);

rx_frm_cnt++;
@@ -984,7 +995,7 @@ static void enetc_free_rxtx_rings(struct enetc_ndev_priv *priv)
enetc_free_tx_ring(priv->tx_ring[i]);
}

-static int enetc_alloc_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
+int enetc_alloc_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
{
int size = cbdr->bd_count * sizeof(struct enetc_cbd);

@@ -1005,7 +1016,7 @@ static int enetc_alloc_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
return 0;
}

-static void enetc_free_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
|
|
+void enetc_free_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
|
|
{
|
|
int size = cbdr->bd_count * sizeof(struct enetc_cbd);
|
|
|
|
@@ -1013,7 +1024,7 @@ static void enetc_free_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
|
|
cbdr->bd_base = NULL;
|
|
}
|
|
|
|
-static void enetc_setup_cbdr(struct enetc_hw *hw, struct enetc_cbdr *cbdr)
|
|
+void enetc_setup_cbdr(struct enetc_hw *hw, struct enetc_cbdr *cbdr)
|
|
{
|
|
/* set CBDR cache attributes */
|
|
enetc_wr(hw, ENETC_SICAR2,
|
|
@@ -1033,7 +1044,7 @@ static void enetc_setup_cbdr(struct enetc_hw *hw, struct enetc_cbdr *cbdr)
|
|
cbdr->cir = hw->reg + ENETC_SICBDRCIR;
|
|
}
|
|
|
|
-static void enetc_clear_cbdr(struct enetc_hw *hw)
|
|
+void enetc_clear_cbdr(struct enetc_hw *hw)
|
|
{
|
|
enetc_wr(hw, ENETC_SICBDRMR, 0);
|
|
}
|
|
@@ -1058,13 +1069,12 @@ static int enetc_setup_default_rss_table(struct enetc_si *si, int num_groups)
|
|
return 0;
|
|
}
|
|
|
|
-static int enetc_configure_si(struct enetc_ndev_priv *priv)
|
|
+int enetc_configure_si(struct enetc_ndev_priv *priv)
|
|
{
|
|
struct enetc_si *si = priv->si;
|
|
struct enetc_hw *hw = &si->hw;
|
|
int err;
|
|
|
|
- enetc_setup_cbdr(hw, &si->cbd_ring);
|
|
/* set SI cache attributes */
|
|
enetc_wr(hw, ENETC_SICAR0,
|
|
ENETC_SICAR_RD_COHERENT | ENETC_SICAR_WR_COHERENT);
|
|
@@ -1112,6 +1122,8 @@ int enetc_alloc_si_resources(struct enetc_ndev_priv *priv)
|
|
if (err)
|
|
return err;
|
|
|
|
+ enetc_setup_cbdr(&si->hw, &si->cbd_ring);
|
|
+
|
|
priv->cls_rules = kcalloc(si->num_fs_entries, sizeof(*priv->cls_rules),
|
|
GFP_KERNEL);
|
|
if (!priv->cls_rules) {
|
|
@@ -1119,14 +1131,8 @@ int enetc_alloc_si_resources(struct enetc_ndev_priv *priv)
|
|
goto err_alloc_cls;
|
|
}
|
|
|
|
- err = enetc_configure_si(priv);
|
|
- if (err)
|
|
- goto err_config_si;
|
|
-
|
|
return 0;
|
|
|
|
-err_config_si:
|
|
- kfree(priv->cls_rules);
|
|
err_alloc_cls:
|
|
enetc_clear_cbdr(&si->hw);
|
|
enetc_free_cbdr(priv->dev, &si->cbd_ring);
|
|
@@ -1212,7 +1218,8 @@ static void enetc_setup_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring)
|
|
rx_ring->idr = hw->reg + ENETC_SIRXIDR;
|
|
|
|
enetc_refill_rx_ring(rx_ring, enetc_bd_unused(rx_ring));
|
|
- enetc_wr(hw, ENETC_SIRXIDR, rx_ring->next_to_use);
|
|
+ /* update ENETC's consumer index */
|
|
+ enetc_rxbdr_wr(hw, idx, ENETC_RBCIR, rx_ring->next_to_use);
|
|
|
|
/* enable ring */
|
|
enetc_rxbdr_wr(hw, idx, ENETC_RBMR, rbmr);
|
|
diff --git a/drivers/net/ethernet/freescale/enetc/enetc.h b/drivers/net/ethernet/freescale/enetc/enetc.h
index 8532d23b54f5f..8b380fc13314a 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.h
+++ b/drivers/net/ethernet/freescale/enetc/enetc.h
@@ -292,6 +292,7 @@ void enetc_get_si_caps(struct enetc_si *si);
 void enetc_init_si_rings_params(struct enetc_ndev_priv *priv);
 int enetc_alloc_si_resources(struct enetc_ndev_priv *priv);
 void enetc_free_si_resources(struct enetc_ndev_priv *priv);
+int enetc_configure_si(struct enetc_ndev_priv *priv);
 
 int enetc_open(struct net_device *ndev);
 int enetc_close(struct net_device *ndev);
@@ -309,6 +310,10 @@ int enetc_setup_tc(struct net_device *ndev, enum tc_setup_type type,
 void enetc_set_ethtool_ops(struct net_device *ndev);
 
 /* control buffer descriptor ring (CBDR) */
+int enetc_alloc_cbdr(struct device *dev, struct enetc_cbdr *cbdr);
+void enetc_free_cbdr(struct device *dev, struct enetc_cbdr *cbdr);
+void enetc_setup_cbdr(struct enetc_hw *hw, struct enetc_cbdr *cbdr);
+void enetc_clear_cbdr(struct enetc_hw *hw);
 int enetc_set_mac_flt_entry(struct enetc_si *si, int index,
 			    char *mac_addr, int si_map);
 int enetc_clear_mac_flt_entry(struct enetc_si *si, int index);
diff --git a/drivers/net/ethernet/freescale/enetc/enetc_hw.h b/drivers/net/ethernet/freescale/enetc/enetc_hw.h
index c71fe8d751d50..de0d20b0f489c 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc_hw.h
+++ b/drivers/net/ethernet/freescale/enetc/enetc_hw.h
@@ -172,6 +172,8 @@ enum enetc_bdr_type {TX, RX};
 #define ENETC_PSIPMAR0(n)	(0x0100 + (n) * 0x8) /* n = SI index */
 #define ENETC_PSIPMAR1(n)	(0x0104 + (n) * 0x8)
 #define ENETC_PVCLCTR		0x0208
+#define ENETC_PCVLANR1		0x0210
+#define ENETC_PCVLANR2		0x0214
 #define ENETC_VLAN_TYPE_C	BIT(0)
 #define ENETC_VLAN_TYPE_S	BIT(1)
 #define ENETC_PVCLCTR_OVTPIDL(bmp)	((bmp) & 0xff) /* VLAN_TYPE */
@@ -236,10 +238,17 @@ enum enetc_bdr_type {TX, RX};
 #define ENETC_PM_IMDIO_BASE	0x8030
 
 #define ENETC_PM0_IF_MODE	0x8300
-#define ENETC_PMO_IFM_RG	BIT(2)
+#define ENETC_PM0_IFM_RG	BIT(2)
 #define ENETC_PM0_IFM_RLP	(BIT(5) | BIT(11))
-#define ENETC_PM0_IFM_RGAUTO	(BIT(15) | ENETC_PMO_IFM_RG | BIT(1))
-#define ENETC_PM0_IFM_XGMII	BIT(12)
+#define ENETC_PM0_IFM_EN_AUTO	BIT(15)
+#define ENETC_PM0_IFM_SSP_MASK	GENMASK(14, 13)
+#define ENETC_PM0_IFM_SSP_1000	(2 << 13)
+#define ENETC_PM0_IFM_SSP_100	(0 << 13)
+#define ENETC_PM0_IFM_SSP_10	(1 << 13)
+#define ENETC_PM0_IFM_FULL_DPX	BIT(12)
+#define ENETC_PM0_IFM_IFMODE_MASK GENMASK(1, 0)
+#define ENETC_PM0_IFM_IFMODE_XGMII 0
+#define ENETC_PM0_IFM_IFMODE_GMII 2
 #define ENETC_PSIDCAPR		0x1b08
 #define ENETC_PSIDCAPR_MSK	GENMASK(15, 0)
 #define ENETC_PSFCAPR		0x1b18
@@ -453,6 +462,8 @@ static inline u64 _enetc_rd_reg64_wa(void __iomem *reg)
 #define enetc_wr_reg(reg, val)	_enetc_wr_reg_wa((reg), (val))
 #define enetc_rd(hw, off)	enetc_rd_reg((hw)->reg + (off))
 #define enetc_wr(hw, off, val)	enetc_wr_reg((hw)->reg + (off), val)
+#define enetc_rd_hot(hw, off)	enetc_rd_reg_hot((hw)->reg + (off))
+#define enetc_wr_hot(hw, off, val) enetc_wr_reg_hot((hw)->reg + (off), val)
 #define enetc_rd64(hw, off)	_enetc_rd_reg64_wa((hw)->reg + (off))
 /* port register accessors - PF only */
 #define enetc_port_rd(hw, off)	enetc_rd_reg((hw)->port + (off))
@@ -568,6 +579,7 @@ union enetc_rx_bd {
 #define ENETC_RXBD_LSTATUS(flags)	((flags) << 16)
 #define ENETC_RXBD_FLAG_VLAN	BIT(9)
 #define ENETC_RXBD_FLAG_TSTMP	BIT(10)
+#define ENETC_RXBD_FLAG_TPID	GENMASK(1, 0)
 
 #define ENETC_MAC_ADDR_FILT_CNT	8 /* # of supported entries per port */
 #define EMETC_MAC_ADDR_FILT_RES	3 /* # of reserved entries at the beginning */
diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pf.c b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
index 515c5b29d7aab..ca02f033bea21 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
@@ -190,7 +190,6 @@ static void enetc_pf_set_rx_mode(struct net_device *ndev)
 {
 	struct enetc_ndev_priv *priv = netdev_priv(ndev);
 	struct enetc_pf *pf = enetc_si_priv(priv->si);
-	char vlan_promisc_simap = pf->vlan_promisc_simap;
 	struct enetc_hw *hw = &priv->si->hw;
 	bool uprom = false, mprom = false;
 	struct enetc_mac_filter *filter;
@@ -203,16 +202,12 @@ static void enetc_pf_set_rx_mode(struct net_device *ndev)
 		psipmr = ENETC_PSIPMR_SET_UP(0) | ENETC_PSIPMR_SET_MP(0);
 		uprom = true;
 		mprom = true;
-		/* Enable VLAN promiscuous mode for SI0 (PF) */
-		vlan_promisc_simap |= BIT(0);
 	} else if (ndev->flags & IFF_ALLMULTI) {
 		/* enable multi cast promisc mode for SI0 (PF) */
 		psipmr = ENETC_PSIPMR_SET_MP(0);
 		mprom = true;
 	}
 
-	enetc_set_vlan_promisc(&pf->si->hw, vlan_promisc_simap);
-
 	/* first 2 filter entries belong to PF */
 	if (!uprom) {
 		/* Update unicast filters */
@@ -320,7 +315,7 @@ static void enetc_set_loopback(struct net_device *ndev, bool en)
 	u32 reg;
 
 	reg = enetc_port_rd(hw, ENETC_PM0_IF_MODE);
-	if (reg & ENETC_PMO_IFM_RG) {
+	if (reg & ENETC_PM0_IFM_RG) {
 		/* RGMII mode */
 		reg = (reg & ~ENETC_PM0_IFM_RLP) |
 		      (en ? ENETC_PM0_IFM_RLP : 0);
@@ -499,13 +494,20 @@ static void enetc_configure_port_mac(struct enetc_hw *hw)
 
 static void enetc_mac_config(struct enetc_hw *hw, phy_interface_t phy_mode)
 {
-	/* set auto-speed for RGMII */
-	if (enetc_port_rd(hw, ENETC_PM0_IF_MODE) & ENETC_PMO_IFM_RG ||
-	    phy_interface_mode_is_rgmii(phy_mode))
-		enetc_port_wr(hw, ENETC_PM0_IF_MODE, ENETC_PM0_IFM_RGAUTO);
+	u32 val;
 
-	if (phy_mode == PHY_INTERFACE_MODE_USXGMII)
-		enetc_port_wr(hw, ENETC_PM0_IF_MODE, ENETC_PM0_IFM_XGMII);
+	if (phy_interface_mode_is_rgmii(phy_mode)) {
+		val = enetc_port_rd(hw, ENETC_PM0_IF_MODE);
+		val &= ~ENETC_PM0_IFM_EN_AUTO;
+		val &= ENETC_PM0_IFM_IFMODE_MASK;
+		val |= ENETC_PM0_IFM_IFMODE_GMII | ENETC_PM0_IFM_RG;
+		enetc_port_wr(hw, ENETC_PM0_IF_MODE, val);
+	}
+
+	if (phy_mode == PHY_INTERFACE_MODE_USXGMII) {
+		val = ENETC_PM0_IFM_FULL_DPX | ENETC_PM0_IFM_IFMODE_XGMII;
+		enetc_port_wr(hw, ENETC_PM0_IF_MODE, val);
+	}
 }
 
 static void enetc_mac_enable(struct enetc_hw *hw, bool en)
@@ -937,6 +939,34 @@ static void enetc_pl_mac_config(struct phylink_config *config,
 		phylink_set_pcs(priv->phylink, &pf->pcs->pcs);
 }
 
+static void enetc_force_rgmii_mac(struct enetc_hw *hw, int speed, int duplex)
+{
+	u32 old_val, val;
+
+	old_val = val = enetc_port_rd(hw, ENETC_PM0_IF_MODE);
+
+	if (speed == SPEED_1000) {
+		val &= ~ENETC_PM0_IFM_SSP_MASK;
+		val |= ENETC_PM0_IFM_SSP_1000;
+	} else if (speed == SPEED_100) {
+		val &= ~ENETC_PM0_IFM_SSP_MASK;
+		val |= ENETC_PM0_IFM_SSP_100;
+	} else if (speed == SPEED_10) {
+		val &= ~ENETC_PM0_IFM_SSP_MASK;
+		val |= ENETC_PM0_IFM_SSP_10;
+	}
+
+	if (duplex == DUPLEX_FULL)
+		val |= ENETC_PM0_IFM_FULL_DPX;
+	else
+		val &= ~ENETC_PM0_IFM_FULL_DPX;
+
+	if (val == old_val)
+		return;
+
+	enetc_port_wr(hw, ENETC_PM0_IF_MODE, val);
+}
+
 static void enetc_pl_mac_link_up(struct phylink_config *config,
 				 struct phy_device *phy, unsigned int mode,
 				 phy_interface_t interface, int speed,
@@ -949,6 +979,10 @@ static void enetc_pl_mac_link_up(struct phylink_config *config,
 	if (priv->active_offloads & ENETC_F_QBV)
 		enetc_sched_speed_set(priv, speed);
 
+	if (!phylink_autoneg_inband(mode) &&
+	    phy_interface_mode_is_rgmii(interface))
+		enetc_force_rgmii_mac(&pf->si->hw, speed, duplex);
+
 	enetc_mac_enable(&pf->si->hw, true);
 }
 
@@ -1041,6 +1075,26 @@ static int enetc_init_port_rss_memory(struct enetc_si *si)
 	return err;
 }
 
+static void enetc_init_unused_port(struct enetc_si *si)
+{
+	struct device *dev = &si->pdev->dev;
+	struct enetc_hw *hw = &si->hw;
+	int err;
+
+	si->cbd_ring.bd_count = ENETC_CBDR_DEFAULT_SIZE;
+	err = enetc_alloc_cbdr(dev, &si->cbd_ring);
+	if (err)
+		return;
+
+	enetc_setup_cbdr(hw, &si->cbd_ring);
+
+	enetc_init_port_rfs_memory(si);
+	enetc_init_port_rss_memory(si);
+
+	enetc_clear_cbdr(hw);
+	enetc_free_cbdr(dev, &si->cbd_ring);
+}
+
 static int enetc_pf_probe(struct pci_dev *pdev,
 			  const struct pci_device_id *ent)
 {
@@ -1051,11 +1105,6 @@ static int enetc_pf_probe(struct pci_dev *pdev,
 	struct enetc_pf *pf;
 	int err;
 
-	if (node && !of_device_is_available(node)) {
-		dev_info(&pdev->dev, "device is disabled, skipping\n");
-		return -ENODEV;
-	}
-
 	err = enetc_pci_probe(pdev, KBUILD_MODNAME, sizeof(*pf));
 	if (err) {
 		dev_err(&pdev->dev, "PCI probing failed\n");
@@ -1069,6 +1118,13 @@ static int enetc_pf_probe(struct pci_dev *pdev,
 		goto err_map_pf_space;
 	}
 
+	if (node && !of_device_is_available(node)) {
+		enetc_init_unused_port(si);
+		dev_info(&pdev->dev, "device is disabled, skipping\n");
+		err = -ENODEV;
+		goto err_device_disabled;
+	}
+
 	pf = enetc_si_priv(si);
 	pf->si = si;
 	pf->total_vfs = pci_sriov_get_totalvfs(pdev);
@@ -1108,6 +1164,12 @@ static int enetc_pf_probe(struct pci_dev *pdev,
 		goto err_init_port_rss;
 	}
 
+	err = enetc_configure_si(priv);
+	if (err) {
+		dev_err(&pdev->dev, "Failed to configure SI\n");
+		goto err_config_si;
+	}
+
 	err = enetc_alloc_msix(priv);
 	if (err) {
 		dev_err(&pdev->dev, "MSIX alloc failed\n");
@@ -1136,6 +1198,7 @@ err_phylink_create:
 	enetc_mdiobus_destroy(pf);
err_mdiobus_create:
 	enetc_free_msix(priv);
+err_config_si:
err_init_port_rss:
err_init_port_rfs:
err_alloc_msix:
@@ -1144,6 +1207,7 @@ err_alloc_si_res:
 	si->ndev = NULL;
 	free_netdev(ndev);
err_alloc_netdev:
+err_device_disabled:
err_map_pf_space:
 	enetc_pci_remove(pdev);
 
diff --git a/drivers/net/ethernet/freescale/enetc/enetc_vf.c b/drivers/net/ethernet/freescale/enetc/enetc_vf.c
index 39c1a09e69a95..9b755a84c2d62 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc_vf.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc_vf.c
@@ -171,6 +171,12 @@ static int enetc_vf_probe(struct pci_dev *pdev,
 		goto err_alloc_si_res;
 	}
 
+	err = enetc_configure_si(priv);
+	if (err) {
+		dev_err(&pdev->dev, "Failed to configure SI\n");
+		goto err_config_si;
+	}
+
 	err = enetc_alloc_msix(priv);
 	if (err) {
 		dev_err(&pdev->dev, "MSIX alloc failed\n");
@@ -187,6 +193,7 @@ static int enetc_vf_probe(struct pci_dev *pdev,
 
err_reg_netdev:
 	enetc_free_msix(priv);
+err_config_si:
err_alloc_msix:
 	enetc_free_si_resources(priv);
err_alloc_si_res:
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
index edfadb5cb1c34..a731f207b4f14 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
@@ -1048,16 +1048,16 @@ struct hclge_fd_tcam_config_3_cmd {
 #define HCLGE_FD_AD_DROP_B		0
 #define HCLGE_FD_AD_DIRECT_QID_B	1
 #define HCLGE_FD_AD_QID_S		2
-#define HCLGE_FD_AD_QID_M		GENMASK(12, 2)
+#define HCLGE_FD_AD_QID_M		GENMASK(11, 2)
 #define HCLGE_FD_AD_USE_COUNTER_B	12
 #define HCLGE_FD_AD_COUNTER_NUM_S	13
 #define HCLGE_FD_AD_COUNTER_NUM_M	GENMASK(20, 13)
 #define HCLGE_FD_AD_NXT_STEP_B		20
 #define HCLGE_FD_AD_NXT_KEY_S		21
-#define HCLGE_FD_AD_NXT_KEY_M		GENMASK(26, 21)
+#define HCLGE_FD_AD_NXT_KEY_M		GENMASK(25, 21)
 #define HCLGE_FD_AD_WR_RULE_ID_B	0
 #define HCLGE_FD_AD_RULE_ID_S		1
-#define HCLGE_FD_AD_RULE_ID_M		GENMASK(13, 1)
+#define HCLGE_FD_AD_RULE_ID_M		GENMASK(12, 1)
 #define HCLGE_FD_AD_TC_OVRD_B		16
 #define HCLGE_FD_AD_TC_SIZE_S		17
 #define HCLGE_FD_AD_TC_SIZE_M		GENMASK(20, 17)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index 48549db23c524..67764d9304355 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -5194,9 +5194,9 @@ static bool hclge_fd_convert_tuple(u32 tuple_bit, u8 *key_x, u8 *key_y,
 	case BIT(INNER_SRC_MAC):
 		for (i = 0; i < ETH_ALEN; i++) {
 			calc_x(key_x[ETH_ALEN - 1 - i], rule->tuples.src_mac[i],
-			       rule->tuples.src_mac[i]);
+			       rule->tuples_mask.src_mac[i]);
 			calc_y(key_y[ETH_ALEN - 1 - i], rule->tuples.src_mac[i],
-			       rule->tuples.src_mac[i]);
+			       rule->tuples_mask.src_mac[i]);
 		}
 
 		return true;
@@ -6283,8 +6283,7 @@ static void hclge_fd_get_ext_info(struct ethtool_rx_flow_spec *fs,
 		fs->h_ext.vlan_tci = cpu_to_be16(rule->tuples.vlan_tag1);
 		fs->m_ext.vlan_tci =
 			rule->unused_tuple & BIT(INNER_VLAN_TAG_FST) ?
-			cpu_to_be16(VLAN_VID_MASK) :
-			cpu_to_be16(rule->tuples_mask.vlan_tag1);
+			0 : cpu_to_be16(rule->tuples_mask.vlan_tag1);
 	}
 
 	if (fs->flow_type & FLOW_MAC_EXT) {
diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 13ae7eee7ef5f..3552c4485ed53 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -1923,10 +1923,9 @@ static int ibmvnic_set_mac(struct net_device *netdev, void *p)
 	if (!is_valid_ether_addr(addr->sa_data))
 		return -EADDRNOTAVAIL;
 
-	if (adapter->state != VNIC_PROBED) {
-		ether_addr_copy(adapter->mac_addr, addr->sa_data);
+	ether_addr_copy(adapter->mac_addr, addr->sa_data);
+	if (adapter->state != VNIC_PROBED)
 		rc = __ibmvnic_set_mac(netdev, addr->sa_data);
-	}
 
 	return rc;
 }
@@ -5283,16 +5282,14 @@ static int ibmvnic_reset_init(struct ibmvnic_adapter *adapter, bool reset)
 {
 	struct device *dev = &adapter->vdev->dev;
 	unsigned long timeout = msecs_to_jiffies(20000);
-	u64 old_num_rx_queues, old_num_tx_queues;
+	u64 old_num_rx_queues = adapter->req_rx_queues;
+	u64 old_num_tx_queues = adapter->req_tx_queues;
 	int rc;
 
 	adapter->from_passive_init = false;
 
-	if (reset) {
-		old_num_rx_queues = adapter->req_rx_queues;
-		old_num_tx_queues = adapter->req_tx_queues;
+	if (reset)
 		reinit_completion(&adapter->init_done);
-	}
 
 	adapter->init_done_rc = 0;
 	rc = ibmvnic_send_crq_init(adapter);
@@ -5477,9 +5474,9 @@ static int ibmvnic_remove(struct vio_dev *dev)
 	 * after setting state, so __ibmvnic_reset() which is called
 	 * from the flush_work() below, can make progress.
 	 */
-	spin_lock_irqsave(&adapter->rwi_lock, flags);
+	spin_lock(&adapter->rwi_lock);
 	adapter->state = VNIC_REMOVING;
-	spin_unlock_irqrestore(&adapter->rwi_lock, flags);
+	spin_unlock(&adapter->rwi_lock);
 
 	spin_unlock_irqrestore(&adapter->state_lock, flags);
 
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index fcd6f623f2fd8..4a2d03cada01e 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -15100,6 +15100,8 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 		if (err) {
 			dev_info(&pdev->dev,
 				 "setup of misc vector failed: %d\n", err);
+			i40e_cloud_filter_exit(pf);
+			i40e_fdir_teardown(pf);
 			goto err_vsis;
 		}
 	}
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
index eca73526ac86b..54d47265a7ac1 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
@@ -575,6 +575,11 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs)
 		return -EINVAL;
 	}
 
+	if (xs->props.mode != XFRM_MODE_TRANSPORT) {
+		netdev_err(dev, "Unsupported mode for ipsec offload\n");
+		return -EINVAL;
+	}
+
 	if (ixgbe_ipsec_check_mgmt_ip(xs)) {
 		netdev_err(dev, "IPsec IP addr clash with mgmt filters\n");
 		return -EINVAL;
diff --git a/drivers/net/ethernet/intel/ixgbevf/ipsec.c b/drivers/net/ethernet/intel/ixgbevf/ipsec.c
index 5170dd9d8705b..caaea2c920a6e 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ipsec.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ipsec.c
@@ -272,6 +272,11 @@ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs)
 		return -EINVAL;
 	}
 
+	if (xs->props.mode != XFRM_MODE_TRANSPORT) {
+		netdev_err(dev, "Unsupported mode for ipsec offload\n");
+		return -EINVAL;
+	}
+
 	if (xs->xso.flags & XFRM_OFFLOAD_INBOUND) {
 		struct rx_sa rsa;
 
diff --git a/drivers/net/ethernet/mediatek/mtk_star_emac.c b/drivers/net/ethernet/mediatek/mtk_star_emac.c
index a8641a407c06a..96d2891f1675a 100644
--- a/drivers/net/ethernet/mediatek/mtk_star_emac.c
+++ b/drivers/net/ethernet/mediatek/mtk_star_emac.c
@@ -1225,8 +1225,6 @@ static int mtk_star_receive_packet(struct mtk_star_priv *priv)
 		goto push_new_skb;
 	}
 
-	desc_data.dma_addr = new_dma_addr;
-
 	/* We can't fail anymore at this point: it's safe to unmap the skb. */
 	mtk_star_dma_unmap_rx(priv, &desc_data);
 
@@ -1236,6 +1234,9 @@ static int mtk_star_receive_packet(struct mtk_star_priv *priv)
 	desc_data.skb->dev = ndev;
 	netif_receive_skb(desc_data.skb);
 
+	/* update dma_addr for new skb */
+	desc_data.dma_addr = new_dma_addr;
+
push_new_skb:
 	desc_data.len = skb_tailroom(new_skb);
 	desc_data.skb = new_skb;
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
index 23849f2b9c252..1434df66fcf2e 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
@@ -47,7 +47,7 @@
 #define EN_ETHTOOL_SHORT_MASK cpu_to_be16(0xffff)
 #define EN_ETHTOOL_WORD_MASK  cpu_to_be32(0xffffffff)
 
-static int mlx4_en_moderation_update(struct mlx4_en_priv *priv)
+int mlx4_en_moderation_update(struct mlx4_en_priv *priv)
 {
 	int i, t;
 	int err = 0;
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
index 32aad4d32b884..c7504223a12a9 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
@@ -3558,6 +3558,8 @@ int mlx4_en_reset_config(struct net_device *dev,
 			en_err(priv, "Failed starting port\n");
 	}
 
+	if (!err)
+		err = mlx4_en_moderation_update(priv);
out:
 	mutex_unlock(&mdev->state_lock);
 	kfree(tmp);
diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
index e8ed23190de01..f3d1a20201ef3 100644
--- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
+++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
@@ -775,6 +775,7 @@ void mlx4_en_ptp_overflow_check(struct mlx4_en_dev *mdev);
 #define DEV_FEATURE_CHANGED(dev, new_features, feature) \
 	((dev->features & feature) ^ (new_features & feature))
 
+int mlx4_en_moderation_update(struct mlx4_en_priv *priv);
 int mlx4_en_reset_config(struct net_device *dev,
 			 struct hwtstamp_config ts_config,
 			 netdev_features_t new_features);
diff --git a/drivers/net/ethernet/mellanox/mlxsw/reg.h b/drivers/net/ethernet/mellanox/mlxsw/reg.h
index 16e2df6ef2f48..c4adc7f740d3e 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/reg.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/reg.h
@@ -4430,6 +4430,7 @@ MLXSW_ITEM32(reg, ptys, ext_eth_proto_cap, 0x08, 0, 32);
 #define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_CR4		BIT(20)
 #define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_SR4		BIT(21)
 #define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_KR4		BIT(22)
+#define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_LR4_ER4	BIT(23)
 #define MLXSW_REG_PTYS_ETH_SPEED_25GBASE_CR		BIT(27)
 #define MLXSW_REG_PTYS_ETH_SPEED_25GBASE_KR		BIT(28)
 #define MLXSW_REG_PTYS_ETH_SPEED_25GBASE_SR		BIT(29)
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c
index 540616469e284..68333ecf6151e 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c
@@ -1171,6 +1171,11 @@ static const struct mlxsw_sp1_port_link_mode mlxsw_sp1_port_link_mode[] = {
 		.mask_ethtool	= ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT,
 		.speed		= SPEED_100000,
 	},
+	{
+		.mask		= MLXSW_REG_PTYS_ETH_SPEED_100GBASE_LR4_ER4,
+		.mask_ethtool	= ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT,
+		.speed		= SPEED_100000,
+	},
 };
 
 #define MLXSW_SP1_PORT_LINK_MODE_LEN ARRAY_SIZE(mlxsw_sp1_port_link_mode)
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
index 41424ee909a08..23d9fe18adba0 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
@@ -5861,6 +5861,10 @@ mlxsw_sp_router_fib4_replace(struct mlxsw_sp *mlxsw_sp,
 	if (mlxsw_sp->router->aborted)
 		return 0;
 
+	if (fen_info->fi->nh &&
+	    !mlxsw_sp_nexthop_obj_group_lookup(mlxsw_sp, fen_info->fi->nh->id))
+		return 0;
+
 	fib_node = mlxsw_sp_fib_node_get(mlxsw_sp, fen_info->tb_id,
 					 &fen_info->dst, sizeof(fen_info->dst),
 					 fen_info->dst_len,
@@ -6511,6 +6515,9 @@ static int mlxsw_sp_router_fib6_replace(struct mlxsw_sp *mlxsw_sp,
 	if (mlxsw_sp_fib6_rt_should_ignore(rt))
 		return 0;
 
+	if (rt->nh && !mlxsw_sp_nexthop_obj_group_lookup(mlxsw_sp, rt->nh->id))
+		return 0;
+
 	fib_node = mlxsw_sp_fib_node_get(mlxsw_sp, rt->fib6_table->tb6_id,
 					 &rt->fib6_dst.addr,
 					 sizeof(rt->fib6_dst.addr),
diff --git a/drivers/net/ethernet/mellanox/mlxsw/switchx2.c b/drivers/net/ethernet/mellanox/mlxsw/switchx2.c
index 40e2e79d45179..131b2a53d261d 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/switchx2.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/switchx2.c
@@ -613,7 +613,8 @@ static const struct mlxsw_sx_port_link_mode mlxsw_sx_port_link_mode[] = {
 	{
 		.mask		= MLXSW_REG_PTYS_ETH_SPEED_100GBASE_CR4 |
 				  MLXSW_REG_PTYS_ETH_SPEED_100GBASE_SR4 |
-				  MLXSW_REG_PTYS_ETH_SPEED_100GBASE_KR4,
+				  MLXSW_REG_PTYS_ETH_SPEED_100GBASE_KR4 |
+				  MLXSW_REG_PTYS_ETH_SPEED_100GBASE_LR4_ER4,
 		.speed		= 100000,
 	},
 };
diff --git a/drivers/net/ethernet/mscc/ocelot_flower.c b/drivers/net/ethernet/mscc/ocelot_flower.c
index 729495a1a77ee..3655503352928 100644
--- a/drivers/net/ethernet/mscc/ocelot_flower.c
+++ b/drivers/net/ethernet/mscc/ocelot_flower.c
@@ -540,13 +540,14 @@ ocelot_flower_parse_key(struct ocelot *ocelot, int port, bool ingress,
 			return -EOPNOTSUPP;
 		}
 
+		flow_rule_match_ipv4_addrs(rule, &match);
+
 		if (filter->block_id == VCAP_IS1 && *(u32 *)&match.mask->dst) {
 			NL_SET_ERR_MSG_MOD(extack,
 					   "Key type S1_NORMAL cannot match on destination IP");
 			return -EOPNOTSUPP;
 		}
 
-		flow_rule_match_ipv4_addrs(rule, &match);
 		tmp = &filter->key.ipv4.sip.value.addr[0];
 		memcpy(tmp, &match.key->src, 4);
 
diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
index 35b015c9ab025..ea265b428c2f3 100644
--- a/drivers/net/ethernet/realtek/r8169_main.c
+++ b/drivers/net/ethernet/realtek/r8169_main.c
@@ -1013,7 +1013,7 @@ static void r8168fp_adjust_ocp_cmd(struct rtl8169_private *tp, u32 *cmd, int typ
 {
 	/* based on RTL8168FP_OOBMAC_BASE in vendor driver */
 	if (tp->mac_version == RTL_GIGA_MAC_VER_52 && type == ERIAR_OOB)
-		*cmd |= 0x7f0 << 18;
+		*cmd |= 0xf70 << 18;
 }
 
 DECLARE_RTL_COND(rtl_eriar_cond)
diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
index 590b088bc4c7f..f029c7c03804f 100644
--- a/drivers/net/ethernet/renesas/sh_eth.c
+++ b/drivers/net/ethernet/renesas/sh_eth.c
@@ -560,6 +560,8 @@ static struct sh_eth_cpu_data r7s72100_data = {
 			  EESR_TDE,
 	.fdr_value	= 0x0000070f,
 
+	.trscer_err_mask = DESC_I_RINT8 | DESC_I_RINT5,
+
 	.no_psr		= 1,
 	.apr		= 1,
 	.mpr		= 1,
@@ -780,6 +782,8 @@ static struct sh_eth_cpu_data r7s9210_data = {
 
 	.fdr_value	= 0x0000070f,
 
+	.trscer_err_mask = DESC_I_RINT8 | DESC_I_RINT5,
+
 	.apr		= 1,
 	.mpr		= 1,
 	.tpauser	= 1,
@@ -1089,6 +1093,9 @@ static struct sh_eth_cpu_data sh771x_data = {
 			  EESIPR_CEEFIP | EESIPR_CELFIP |
 			  EESIPR_RRFIP | EESIPR_RTLFIP | EESIPR_RTSFIP |
 			  EESIPR_PREIP | EESIPR_CERFIP,
+
+	.trscer_err_mask = DESC_I_RINT8,
+
 	.tsu		= 1,
 	.dual_port	= 1,
 };
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
index 103d2448e9e0d..a9087dae767de 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
@@ -233,6 +233,7 @@ static void common_default_data(struct plat_stmmacenet_data *plat)
 static int intel_mgbe_common_data(struct pci_dev *pdev,
 				  struct plat_stmmacenet_data *plat)
 {
+	char clk_name[20];
 	int ret;
 	int i;
 
@@ -300,8 +301,10 @@ static int intel_mgbe_common_data(struct pci_dev *pdev,
 	plat->eee_usecs_rate = plat->clk_ptp_rate;
 
 	/* Set system clock */
+	sprintf(clk_name, "%s-%s", "stmmac", pci_name(pdev));
+
 	plat->stmmac_clk = clk_register_fixed_rate(&pdev->dev,
-						   "stmmac-clk", NULL, 0,
+						   clk_name, NULL, 0,
 						   plat->clk_ptp_rate);
 
 	if (IS_ERR(plat->stmmac_clk)) {
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c
index c6540b003b430..2ecd3a8a690c2 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c
@@ -499,10 +499,15 @@ static void dwmac4_get_rx_header_len(struct dma_desc *p, unsigned int *len)
 	*len = le32_to_cpu(p->des2) & RDES2_HL;
 }
 
-static void dwmac4_set_sec_addr(struct dma_desc *p, dma_addr_t addr)
+static void dwmac4_set_sec_addr(struct dma_desc *p, dma_addr_t addr, bool buf2_valid)
 {
 	p->des2 = cpu_to_le32(lower_32_bits(addr));
-	p->des3 = cpu_to_le32(upper_32_bits(addr) | RDES3_BUFFER2_VALID_ADDR);
+	p->des3 = cpu_to_le32(upper_32_bits(addr));
+
+	if (buf2_valid)
+		p->des3 |= cpu_to_le32(RDES3_BUFFER2_VALID_ADDR);
+	else
+		p->des3 &= cpu_to_le32(~RDES3_BUFFER2_VALID_ADDR);
 }
 
 static void dwmac4_set_tbs(struct dma_edesc *p, u32 sec, u32 nsec)
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
index bb29bfcd62c34..62aa0e95beb70 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
@@ -124,6 +124,23 @@ static void dwmac4_dma_init_channel(void __iomem *ioaddr,
 	       ioaddr + DMA_CHAN_INTR_ENA(chan));
 }
 
+static void dwmac410_dma_init_channel(void __iomem *ioaddr,
+				      struct stmmac_dma_cfg *dma_cfg, u32 chan)
+{
+	u32 value;
+
+	/* common channel control register config */
+	value = readl(ioaddr + DMA_CHAN_CONTROL(chan));
+	if (dma_cfg->pblx8)
+		value = value | DMA_BUS_MODE_PBL;
+
+	writel(value, ioaddr + DMA_CHAN_CONTROL(chan));
+
+	/* Mask interrupts by writing to CSR7 */
+	writel(DMA_CHAN_INTR_DEFAULT_MASK_4_10,
+	       ioaddr + DMA_CHAN_INTR_ENA(chan));
+}
+
 static void dwmac4_dma_init(void __iomem *ioaddr,
 			    struct stmmac_dma_cfg *dma_cfg, int atds)
 {
@@ -523,7 +540,7 @@ const struct stmmac_dma_ops dwmac4_dma_ops = {
 const struct stmmac_dma_ops dwmac410_dma_ops = {
 	.reset = dwmac4_dma_reset,
 	.init = dwmac4_dma_init,
-	.init_chan = dwmac4_dma_init_channel,
+	.init_chan = dwmac410_dma_init_channel,
 	.init_rx_chan = dwmac4_dma_init_rx_chan,
 	.init_tx_chan = dwmac4_dma_init_tx_chan,
 	.axi = dwmac4_dma_axi,
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
index 0b4ee2dbb691d..71e50751ef2dc 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
@@ -53,10 +53,6 @@ void dwmac4_dma_stop_tx(void __iomem *ioaddr, u32 chan)
 
 	value &= ~DMA_CONTROL_ST;
 	writel(value, ioaddr + DMA_CHAN_TX_CONTROL(chan));
-
-	value = readl(ioaddr + GMAC_CONFIG);
-	value &= ~GMAC_CONFIG_TE;
-	writel(value, ioaddr + GMAC_CONFIG);
 }
 
 void dwmac4_dma_start_rx(void __iomem *ioaddr, u32 chan)
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_descs.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_descs.c
index 0aaf19ab56729..ccfb0102dde49 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_descs.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_descs.c
@@ -292,7 +292,7 @@ static void dwxgmac2_get_rx_header_len(struct dma_desc *p, unsigned int *len)
 	*len = le32_to_cpu(p->des2) & XGMAC_RDES2_HL;
 }
 
-static void dwxgmac2_set_sec_addr(struct dma_desc *p, dma_addr_t addr)
+static void dwxgmac2_set_sec_addr(struct dma_desc *p, dma_addr_t addr, bool is_valid)
 {
 	p->des2 = cpu_to_le32(lower_32_bits(addr));
 	p->des3 = cpu_to_le32(upper_32_bits(addr));
diff --git a/drivers/net/ethernet/stmicro/stmmac/hwif.h b/drivers/net/ethernet/stmicro/stmmac/hwif.h
index b40b2e0667bba..15d7b82611896 100644
--- a/drivers/net/ethernet/stmicro/stmmac/hwif.h
+++ b/drivers/net/ethernet/stmicro/stmmac/hwif.h
@@ -91,7 +91,7 @@ struct stmmac_desc_ops {
 	int (*get_rx_hash)(struct dma_desc *p, u32 *hash,
 			   enum pkt_hash_types *type);
 	void (*get_rx_header_len)(struct dma_desc *p, unsigned int *len);
-	void (*set_sec_addr)(struct dma_desc *p, dma_addr_t addr);
+	void (*set_sec_addr)(struct dma_desc *p, dma_addr_t addr, bool buf2_valid);
 	void (*set_sarc)(struct dma_desc *p, u32 sarc_type);
 	void (*set_vlan_tag)(struct dma_desc *p, u16 tag, u16 inner_tag,
 			     u32 inner_type);
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index 26b971cd4da5a..e87961432a793 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -1303,9 +1303,10 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p,
 			return -ENOMEM;
 
 		buf->sec_addr = page_pool_get_dma_addr(buf->sec_page);
-		stmmac_set_desc_sec_addr(priv, p, buf->sec_addr);
+		stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, true);
 	} else {
 		buf->sec_page = NULL;
+		stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, false);
 	}
 
 	buf->addr = page_pool_get_dma_addr(buf->page);
@@ -3648,7 +3649,10 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
 					   DMA_FROM_DEVICE);
 
 		stmmac_set_desc_addr(priv, p, buf->addr);
-		stmmac_set_desc_sec_addr(priv, p, buf->sec_addr);
+		if (priv->sph)
+			stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, true);
+		else
+			stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, false);
 		stmmac_refill_desc3(priv, rx_q, p);
 
 		rx_q->rx_count_frames++;
@@ -5144,13 +5148,16 @@ int stmmac_dvr_remove(struct device *dev)
 	netdev_info(priv->dev, "%s: removing driver", __func__);
 
 	stmmac_stop_all_dma(priv);
+	stmmac_mac_set(priv, priv->ioaddr, false);
+	netif_carrier_off(ndev);
+	unregister_netdev(ndev);
 
+	/* Serdes power down needs to happen after VLAN filter
+	 * is deleted that is triggered by unregister_netdev().
+	 */
 	if (priv->plat->serdes_powerdown)
 		priv->plat->serdes_powerdown(ndev, priv->plat->bsp_priv);
 
-	stmmac_mac_set(priv, priv->ioaddr, false);
-	netif_carrier_off(ndev);
-	unregister_netdev(ndev);
 #ifdef CONFIG_DEBUG_FS
 	stmmac_exit_fs(ndev);
 #endif
@@ -5257,6 +5264,8 @@ static void stmmac_reset_queues_param(struct stmmac_priv *priv)
 		tx_q->cur_tx = 0;
 		tx_q->dirty_tx = 0;
 		tx_q->mss = 0;
+
+		netdev_tx_reset_queue(netdev_get_tx_queue(priv->dev, queue));
 	}
 }
 
diff --git a/drivers/net/netdevsim/netdev.c b/drivers/net/netdevsim/netdev.c
index 7178468302c8f..ad6dbf0110526 100644
--- a/drivers/net/netdevsim/netdev.c
+++ b/drivers/net/netdevsim/netdev.c
@@ -296,6 +296,7 @@ nsim_create(struct nsim_dev *nsim_dev, struct nsim_dev_port *nsim_dev_port)
 	dev_net_set(dev, nsim_dev_net(nsim_dev));
 	ns = netdev_priv(dev);
 	ns->netdev = dev;
+	u64_stats_init(&ns->syncp);
 	ns->nsim_dev = nsim_dev;
 	ns->nsim_dev_port = nsim_dev_port;
 	ns->nsim_bus_dev = nsim_dev->nsim_bus_dev;
diff --git a/drivers/net/phy/dp83822.c b/drivers/net/phy/dp83822.c
index fff371ca1086c..423952cb9e1cd 100644
--- a/drivers/net/phy/dp83822.c
+++ b/drivers/net/phy/dp83822.c
@@ -290,6 +290,7 @@ static int dp83822_config_intr(struct phy_device *phydev)
 
 static irqreturn_t dp83822_handle_interrupt(struct phy_device *phydev)
 {
+	bool trigger_machine = false;
 	int irq_status;
 
 	/* The MISR1 and MISR2 registers are holding the interrupt status in
@@ -305,7 +306,7 @@ static irqreturn_t dp83822_handle_interrupt(struct phy_device *phydev)
 		return IRQ_NONE;
 	}
 	if (irq_status & ((irq_status & GENMASK(7, 0)) << 8))
-		goto trigger_machine;
+		trigger_machine = true;
 
 	irq_status = phy_read(phydev, MII_DP83822_MISR2);
 	if (irq_status < 0) {
@@ -313,11 +314,11 @@ static irqreturn_t dp83822_handle_interrupt(struct phy_device *phydev)
 		return IRQ_NONE;
 	}
 	if (irq_status & ((irq_status & GENMASK(7, 0)) << 8))
-		goto trigger_machine;
+		trigger_machine = true;
 
-	return IRQ_NONE;
+	if (!trigger_machine)
+		return IRQ_NONE;
 
-trigger_machine:
 	phy_trigger_machine(phydev);
 
 	return IRQ_HANDLED;
diff --git a/drivers/net/phy/dp83tc811.c b/drivers/net/phy/dp83tc811.c
index 688fadffb249d..7ea32fb77190c 100644
--- a/drivers/net/phy/dp83tc811.c
+++ b/drivers/net/phy/dp83tc811.c
@@ -264,6 +264,7 @@ static int dp83811_config_intr(struct phy_device *phydev)
 
 static irqreturn_t dp83811_handle_interrupt(struct phy_device *phydev)
 {
+	bool trigger_machine = false;
 	int irq_status;
 
 	/* The INT_STAT registers 1, 2 and 3 are holding the interrupt status
@@ -279,7 +280,7 @@ static irqreturn_t dp83811_handle_interrupt(struct phy_device *phydev)
 		return IRQ_NONE;
 	}
 	if (irq_status & ((irq_status & GENMASK(7, 0)) << 8))
-		goto trigger_machine;
+		trigger_machine = true;
 
 	irq_status = phy_read(phydev, MII_DP83811_INT_STAT2);
 	if (irq_status < 0) {
@@ -287,7 +288,7 @@ static irqreturn_t dp83811_handle_interrupt(struct phy_device *phydev)
 		return IRQ_NONE;
 	}
 	if (irq_status & ((irq_status & GENMASK(7, 0)) << 8))
-		goto trigger_machine;
+		trigger_machine = true;
 
 	irq_status = phy_read(phydev, MII_DP83811_INT_STAT3);
 	if (irq_status < 0) {
@@ -295,11 +296,11 @@ static irqreturn_t dp83811_handle_interrupt(struct phy_device *phydev)
 		return IRQ_NONE;
 	}
 	if (irq_status & ((irq_status & GENMASK(7, 0)) << 8))
-		goto trigger_machine;
+		trigger_machine = true;
 
-	return IRQ_NONE;
+	if (!trigger_machine)
+		return IRQ_NONE;
 
-trigger_machine:
 	phy_trigger_machine(phydev);
 
 	return IRQ_HANDLED;
diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
index 45f75533c47ce..b79c4068ee619 100644
--- a/drivers/net/phy/phy.c
+++ b/drivers/net/phy/phy.c
@@ -276,14 +276,16 @@ int phy_ethtool_ksettings_set(struct phy_device *phydev,
 
 	phydev->autoneg = autoneg;
 
-	phydev->speed = speed;
+	if (autoneg == AUTONEG_DISABLE) {
+		phydev->speed = speed;
+		phydev->duplex = duplex;
+	}
 
 	linkmode_copy(phydev->advertising, advertising);
 
 	linkmode_mod_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
 			 phydev->advertising, autoneg == AUTONEG_ENABLE);
 
-	phydev->duplex = duplex;
 	phydev->master_slave_set = cmd->base.master_slave_cfg;
 	phydev->mdix_ctrl = cmd->base.eth_tp_mdix_ctrl;
 
diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
index 71169e7d6177d..1c6ae845e03f2 100644
--- a/drivers/net/phy/phy_device.c
+++ b/drivers/net/phy/phy_device.c
@@ -230,7 +230,6 @@ static struct phy_driver genphy_driver;
 static LIST_HEAD(phy_fixup_list);
 static DEFINE_MUTEX(phy_fixup_lock);
 
-#ifdef CONFIG_PM
 static bool mdio_bus_phy_may_suspend(struct phy_device *phydev)
 {
 	struct device_driver *drv = phydev->mdio.dev.driver;
@@ -270,7 +269,7 @@ out:
 	return !phydev->suspended;
 }
 
-static int mdio_bus_phy_suspend(struct device *dev)
+static __maybe_unused int mdio_bus_phy_suspend(struct device *dev)
 {
 	struct phy_device *phydev = to_phy_device(dev);
 
@@ -290,7 +289,7 @@ static int mdio_bus_phy_suspend(struct device *dev)
 	return phy_suspend(phydev);
 }
 
-static int mdio_bus_phy_resume(struct device *dev)
+static __maybe_unused int mdio_bus_phy_resume(struct device *dev)
 {
 	struct phy_device *phydev = to_phy_device(dev);
 	int ret;
@@ -316,7 +315,6 @@ no_resume:
 
 static SIMPLE_DEV_PM_OPS(mdio_bus_phy_pm_ops, mdio_bus_phy_suspend,
 			 mdio_bus_phy_resume);
-#endif /* CONFIG_PM */
 
 /**
  * phy_register_fixup - creates a new phy_fixup and adds it to the list
diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
index 7410215e2a2e9..e18ded349d840 100644
--- a/drivers/net/usb/qmi_wwan.c
+++ b/drivers/net/usb/qmi_wwan.c
@@ -396,13 +396,6 @@ static ssize_t add_mux_store(struct device *d, struct device_attribute *attr, c
 		goto err;
 	}
 
-	/* we don't want to modify a running netdev */
-	if (netif_running(dev->net)) {
-		netdev_err(dev->net, "Cannot change a running device\n");
-		ret = -EBUSY;
-		goto err;
-	}
-
 	ret = qmimux_register_device(dev->net, mux_id);
 	if (!ret) {
 		info->flags |= QMI_WWAN_FLAG_MUX;
@@ -432,13 +425,6 @@ static ssize_t del_mux_store(struct device *d, struct device_attribute *attr, c
 	if (!rtnl_trylock())
 		return restart_syscall();
 
-	/* we don't want to modify a running netdev */
-	if (netif_running(dev->net)) {
-		netdev_err(dev->net, "Cannot change a running device\n");
-		ret = -EBUSY;
-		goto err;
-	}
-
 	del_dev = qmimux_find_dev(dev, mux_id);
 	if (!del_dev) {
 		netdev_err(dev->net, "mux_id not present\n");
diff --git a/drivers/net/wan/lapbether.c b/drivers/net/wan/lapbether.c
index 605fe555e157d..c3372498f4f15 100644
--- a/drivers/net/wan/lapbether.c
+++ b/drivers/net/wan/lapbether.c
@@ -292,7 +292,6 @@ static int lapbeth_open(struct net_device *dev)
 		return -ENODEV;
 	}
 
-	netif_start_queue(dev);
 	return 0;
 }
 
@@ -300,8 +299,6 @@ static int lapbeth_close(struct net_device *dev)
 {
 	int err;
 
-	netif_stop_queue(dev);
-
 	if ((err = lapb_unregister(dev)) != LAPB_OK)
 		pr_err("lapb_unregister error: %d\n", err);
 
diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
index 7d799fe6fbd89..54bdef33f3f85 100644
--- a/drivers/net/wireless/ath/ath11k/mac.c
+++ b/drivers/net/wireless/ath/ath11k/mac.c
@@ -5299,8 +5299,8 @@ ath11k_mac_op_assign_vif_chanctx(struct ieee80211_hw *hw,
 	}
 
 	if (ab->hw_params.vdev_start_delay &&
-	    (arvif->vdev_type == WMI_VDEV_TYPE_AP ||
-	    arvif->vdev_type == WMI_VDEV_TYPE_MONITOR)) {
+	    arvif->vdev_type != WMI_VDEV_TYPE_AP &&
+	    arvif->vdev_type != WMI_VDEV_TYPE_MONITOR) {
 		param.vdev_id = arvif->vdev_id;
 		param.peer_type = WMI_PEER_TYPE_DEFAULT;
 		param.peer_addr = ar->mac_addr;
diff --git a/drivers/net/wireless/ath/ath9k/ath9k.h b/drivers/net/wireless/ath/ath9k/ath9k.h
index 13b4f5f50f8aa..ef6f5ea06c1f5 100644
--- a/drivers/net/wireless/ath/ath9k/ath9k.h
+++ b/drivers/net/wireless/ath/ath9k/ath9k.h
@@ -177,7 +177,8 @@ struct ath_frame_info {
 	s8 txq;
 	u8 keyix;
 	u8 rtscts_rate;
-	u8 retries : 7;
+	u8 retries : 6;
+	u8 dyn_smps : 1;
 	u8 baw_tracked : 1;
 	u8 tx_power;
 	enum ath9k_key_type keytype:2;
diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c
index e60d4737fc6e4..5691bd6eb82c2 100644
--- a/drivers/net/wireless/ath/ath9k/xmit.c
+++ b/drivers/net/wireless/ath/ath9k/xmit.c
@@ -1271,6 +1271,11 @@ static void ath_buf_set_rate(struct ath_softc *sc, struct ath_buf *bf,
 				 is_40, is_sgi, is_sp);
 			if (rix < 8 && (tx_info->flags & IEEE80211_TX_CTL_STBC))
 				info->rates[i].RateFlags |= ATH9K_RATESERIES_STBC;
+			if (rix >= 8 && fi->dyn_smps) {
+				info->rates[i].RateFlags |=
+					ATH9K_RATESERIES_RTS_CTS;
+				info->flags |= ATH9K_TXDESC_CTSENA;
+			}
 
 			info->txpower[i] = ath_get_rate_txpower(sc, bf, rix,
 								is_40, false);
@@ -2114,6 +2119,7 @@ static void setup_frame_info(struct ieee80211_hw *hw,
 		fi->keyix = an->ps_key;
 	else
 		fi->keyix = ATH9K_TXKEYIX_INVALID;
+	fi->dyn_smps = sta && sta->smps_mode == IEEE80211_SMPS_DYNAMIC;
 	fi->keytype = keytype;
 	fi->framelen = framelen;
 	fi->tx_power = txpower;
diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireless/mediatek/mt76/dma.c
index e81dfaf99bcbf..9bf13994c036b 100644
--- a/drivers/net/wireless/mediatek/mt76/dma.c
+++ b/drivers/net/wireless/mediatek/mt76/dma.c
@@ -511,13 +511,13 @@ mt76_add_fragment(struct mt76_dev *dev, struct mt76_queue *q, void *data,
 {
 	struct sk_buff *skb = q->rx_head;
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
+	int nr_frags = shinfo->nr_frags;
 
-	if (shinfo->nr_frags < ARRAY_SIZE(shinfo->frags)) {
+	if (nr_frags < ARRAY_SIZE(shinfo->frags)) {
 		struct page *page = virt_to_head_page(data);
 		int offset = data - page_address(page) + q->buf_offset;
 
-		skb_add_rx_frag(skb, shinfo->nr_frags, page, offset, len,
-				q->buf_size);
+		skb_add_rx_frag(skb, nr_frags, page, offset, len, q->buf_size);
 	} else {
 		skb_free_frag(data);
 	}
@@ -526,7 +526,10 @@ mt76_add_fragment(struct mt76_dev *dev, struct mt76_queue *q, void *data,
 		return;
 
 	q->rx_head = NULL;
-	dev->drv->rx_skb(dev, q - dev->q_rx, skb);
+	if (nr_frags < ARRAY_SIZE(shinfo->frags))
+		dev->drv->rx_skb(dev, q - dev->q_rx, skb);
+	else
+		dev_kfree_skb(skb);
 }
 
 static int
diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 5f36cfa8136c0..7ec6869b3e5b1 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -2055,7 +2055,7 @@ done:
 		nvme_fc_complete_rq(rq);
 
 check_error:
-	if (terminate_assoc)
+	if (terminate_assoc && ctrl->ctrl.state != NVME_CTRL_RESETTING)
 		queue_work(nvme_reset_wq, &ctrl->ioerr_work);
 }
 
diff --git a/drivers/opp/core.c b/drivers/opp/core.c
index 8c905aabacc01..f4fb43816e595 100644
--- a/drivers/opp/core.c
+++ b/drivers/opp/core.c
@@ -1335,7 +1335,11 @@ static struct dev_pm_opp *_opp_get_next(struct opp_table *opp_table,
 
 	mutex_lock(&opp_table->lock);
 	list_for_each_entry(temp, &opp_table->opp_list, node) {
-		if (dynamic == temp->dynamic) {
+		/*
+		 * Refcount must be dropped only once for each OPP by OPP core,
+		 * do that with help of "removed" flag.
+		 */
+		if (!temp->removed && dynamic == temp->dynamic) {
 			opp = temp;
 			break;
 		}
@@ -1345,10 +1349,27 @@ static struct dev_pm_opp *_opp_get_next(struct opp_table *opp_table,
 	return opp;
 }
 
-bool _opp_remove_all_static(struct opp_table *opp_table)
+/*
+ * Can't call dev_pm_opp_put() from under the lock as debugfs removal needs to
+ * happen lock less to avoid circular dependency issues. This routine must be
+ * called without the opp_table->lock held.
+ */
+static void _opp_remove_all(struct opp_table *opp_table, bool dynamic)
 {
 	struct dev_pm_opp *opp;
 
+	while ((opp = _opp_get_next(opp_table, dynamic))) {
+		opp->removed = true;
+		dev_pm_opp_put(opp);
+
+		/* Drop the references taken by dev_pm_opp_add() */
+		if (dynamic)
+			dev_pm_opp_put_opp_table(opp_table);
+	}
+}
+
+bool _opp_remove_all_static(struct opp_table *opp_table)
+{
 	mutex_lock(&opp_table->lock);
 
 	if (!opp_table->parsed_static_opps) {
@@ -1363,13 +1384,7 @@ bool _opp_remove_all_static(struct opp_table *opp_table)
 
 	mutex_unlock(&opp_table->lock);
 
-	/*
-	 * Can't remove the OPP from under the lock, debugfs removal needs to
-	 * happen lock less to avoid circular dependency issues.
-	 */
-	while ((opp = _opp_get_next(opp_table, false)))
-		dev_pm_opp_put(opp);
-
+	_opp_remove_all(opp_table, false);
 	return true;
 }
 
@@ -1382,25 +1397,12 @@ bool _opp_remove_all_static(struct opp_table *opp_table)
 void dev_pm_opp_remove_all_dynamic(struct device *dev)
 {
 	struct opp_table *opp_table;
-	struct dev_pm_opp *opp;
-	int count = 0;
 
 	opp_table = _find_opp_table(dev);
 	if (IS_ERR(opp_table))
 		return;
 
-	/*
-	 * Can't remove the OPP from under the lock, debugfs removal needs to
-	 * happen lock less to avoid circular dependency issues.
-	 */
-	while ((opp = _opp_get_next(opp_table, true))) {
-		dev_pm_opp_put(opp);
-		count++;
-	}
-
-	/* Drop the references taken by dev_pm_opp_add() */
-	while (count--)
-		dev_pm_opp_put_opp_table(opp_table);
+	_opp_remove_all(opp_table, true);
 
 	/* Drop the reference taken by _find_opp_table() */
 	dev_pm_opp_put_opp_table(opp_table);
diff --git a/drivers/opp/opp.h b/drivers/opp/opp.h
index 4ced7ffa8158e..8dca37662dd83 100644
--- a/drivers/opp/opp.h
+++ b/drivers/opp/opp.h
@@ -56,6 +56,7 @@ extern struct list_head opp_tables;
 * @dynamic:	not-created from static DT entries.
 * @turbo:	true if turbo (boost) OPP
 * @suspend:	true if suspend OPP
+ * @removed:	flag indicating that OPP's reference is dropped by OPP core.
 * @pstate: Device's power domain's performance state.
 * @rate:	Frequency in hertz
 * @level:	Performance level
@@ -78,6 +79,7 @@ struct dev_pm_opp {
 	bool dynamic;
 	bool turbo;
 	bool suspend;
+	bool removed;
 	unsigned int pstate;
 	unsigned long rate;
 	unsigned int level;
diff --git a/drivers/pci/controller/pci-xgene-msi.c b/drivers/pci/controller/pci-xgene-msi.c
index 2470782cb01af..1c34c897a7e2a 100644
--- a/drivers/pci/controller/pci-xgene-msi.c
+++ b/drivers/pci/controller/pci-xgene-msi.c
@@ -384,13 +384,9 @@ static int xgene_msi_hwirq_alloc(unsigned int cpu)
 		if (!msi_group->gic_irq)
 			continue;
 
-		irq_set_chained_handler(msi_group->gic_irq,
-					xgene_msi_isr);
-		err = irq_set_handler_data(msi_group->gic_irq, msi_group);
-		if (err) {
-			pr_err("failed to register GIC IRQ handler\n");
-			return -EINVAL;
-		}
+		irq_set_chained_handler_and_data(msi_group->gic_irq,
+			xgene_msi_isr, msi_group);
+
 		/*
 		 * Statically allocate MSI GIC IRQs to each CPU core.
 		 * With 8-core X-Gene v1, 2 MSI GIC IRQs are allocated
diff --git a/drivers/pci/controller/pcie-mediatek.c b/drivers/pci/controller/pcie-mediatek.c
index cf4c18f0c25ab..23548b517e4b6 100644
--- a/drivers/pci/controller/pcie-mediatek.c
+++ b/drivers/pci/controller/pcie-mediatek.c
@@ -1035,14 +1035,14 @@ static int mtk_pcie_setup(struct mtk_pcie *pcie)
 		err = of_pci_get_devfn(child);
 		if (err < 0) {
 			dev_err(dev, "failed to parse devfn: %d\n", err);
-			return err;
+			goto error_put_node;
 		}
 
 		slot = PCI_SLOT(err);
 
 		err = mtk_pcie_parse_port(pcie, child, slot);
 		if (err)
-			return err;
+			goto error_put_node;
 	}
 
 	err = mtk_pcie_subsys_powerup(pcie);
@@ -1058,6 +1058,9 @@ static int mtk_pcie_setup(struct mtk_pcie *pcie)
 		mtk_pcie_subsys_powerdown(pcie);
 
 	return 0;
+error_put_node:
+	of_node_put(child);
+	return err;
 }
 
 static int mtk_pcie_probe(struct platform_device *pdev)
diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index ba791165ed194..9449dfde2841e 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -4029,6 +4029,10 @@ int pci_register_io_range(struct fwnode_handle *fwnode, phys_addr_t addr,
 	ret = logic_pio_register_range(range);
 	if (ret)
 		kfree(range);
+
+	/* Ignore duplicates due to deferred probing */
+	if (ret == -EEXIST)
+		ret = 0;
 #endif
 
 	return ret;
diff --git a/drivers/pci/pcie/Kconfig b/drivers/pci/pcie/Kconfig
index 3946555a60422..45a2ef702b45b 100644
--- a/drivers/pci/pcie/Kconfig
+++ b/drivers/pci/pcie/Kconfig
@@ -133,14 +133,6 @@ config PCIE_PTM
 	  This is only useful if you have devices that support PTM, but it
 	  is safe to enable even if you don't.
 
-config PCIE_BW
-	bool "PCI Express Bandwidth Change Notification"
-	depends on PCIEPORTBUS
-	help
-	  This enables PCI Express Bandwidth Change Notification.  If
-	  you know link width or rate changes occur only to correct
-	  unreliable links, you may answer Y.
-
 config PCIE_EDR
 	bool "PCI Express Error Disconnect Recover support"
 	depends on PCIE_DPC && ACPI
diff --git a/drivers/pci/pcie/Makefile b/drivers/pci/pcie/Makefile
index d9697892fa3e9..b2980db88cc09 100644
--- a/drivers/pci/pcie/Makefile
+++ b/drivers/pci/pcie/Makefile
@@ -12,5 +12,4 @@ obj-$(CONFIG_PCIEAER_INJECT)	+= aer_inject.o
 obj-$(CONFIG_PCIE_PME)		+= pme.o
 obj-$(CONFIG_PCIE_DPC)		+= dpc.o
 obj-$(CONFIG_PCIE_PTM)		+= ptm.o
-obj-$(CONFIG_PCIE_BW)		+= bw_notification.o
 obj-$(CONFIG_PCIE_EDR)		+= edr.o
diff --git a/drivers/pci/pcie/bw_notification.c b/drivers/pci/pcie/bw_notification.c
deleted file mode 100644
index 565d23cccb8b5..0000000000000
--- a/drivers/pci/pcie/bw_notification.c
+++ /dev/null
@@ -1,138 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0+
-/*
- * PCI Express Link Bandwidth Notification services driver
- * Author: Alexandru Gagniuc <mr.nuke.me@gmail.com>
- *
- * Copyright (C) 2019, Dell Inc
- *
- * The PCIe Link Bandwidth Notification provides a way to notify the
- * operating system when the link width or data rate changes.  This
- * capability is required for all root ports and downstream ports
- * supporting links wider than x1 and/or multiple link speeds.
- *
- * This service port driver hooks into the bandwidth notification interrupt
- * and warns when links become degraded in operation.
- */
-
-#define dev_fmt(fmt) "bw_notification: " fmt
-
-#include "../pci.h"
-#include "portdrv.h"
-
-static bool pcie_link_bandwidth_notification_supported(struct pci_dev *dev)
-{
-	int ret;
-	u32 lnk_cap;
-
-	ret = pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnk_cap);
-	return (ret == PCIBIOS_SUCCESSFUL) && (lnk_cap & PCI_EXP_LNKCAP_LBNC);
-}
-
-static void pcie_enable_link_bandwidth_notification(struct pci_dev *dev)
-{
-	u16 lnk_ctl;
-
-	pcie_capability_write_word(dev, PCI_EXP_LNKSTA, PCI_EXP_LNKSTA_LBMS);
-
-	pcie_capability_read_word(dev, PCI_EXP_LNKCTL, &lnk_ctl);
-	lnk_ctl |= PCI_EXP_LNKCTL_LBMIE;
-	pcie_capability_write_word(dev, PCI_EXP_LNKCTL, lnk_ctl);
-}
-
-static void pcie_disable_link_bandwidth_notification(struct pci_dev *dev)
-{
-	u16 lnk_ctl;
-
-	pcie_capability_read_word(dev, PCI_EXP_LNKCTL, &lnk_ctl);
-	lnk_ctl &= ~PCI_EXP_LNKCTL_LBMIE;
-	pcie_capability_write_word(dev, PCI_EXP_LNKCTL, lnk_ctl);
-}
-
-static irqreturn_t pcie_bw_notification_irq(int irq, void *context)
-{
-	struct pcie_device *srv = context;
-	struct pci_dev *port = srv->port;
-	u16 link_status, events;
-	int ret;
-
-	ret = pcie_capability_read_word(port, PCI_EXP_LNKSTA, &link_status);
-	events = link_status & PCI_EXP_LNKSTA_LBMS;
-
-	if (ret != PCIBIOS_SUCCESSFUL || !events)
-		return IRQ_NONE;
-
-	pcie_capability_write_word(port, PCI_EXP_LNKSTA, events);
-	pcie_update_link_speed(port->subordinate, link_status);
-	return IRQ_WAKE_THREAD;
-}
-
-static irqreturn_t pcie_bw_notification_handler(int irq, void *context)
-{
-	struct pcie_device *srv = context;
-	struct pci_dev *port = srv->port;
-	struct pci_dev *dev;
-
-	/*
-	 * Print status from downstream devices, not this root port or
-	 * downstream switch port.
-	 */
-	down_read(&pci_bus_sem);
-	list_for_each_entry(dev, &port->subordinate->devices, bus_list)
-		pcie_report_downtraining(dev);
-	up_read(&pci_bus_sem);
-
-	return IRQ_HANDLED;
-}
-
-static int pcie_bandwidth_notification_probe(struct pcie_device *srv)
-{
-	int ret;
-
-	/* Single-width or single-speed ports do not have to support this. */
-	if (!pcie_link_bandwidth_notification_supported(srv->port))
-		return -ENODEV;
-
-	ret = request_threaded_irq(srv->irq, pcie_bw_notification_irq,
-				   pcie_bw_notification_handler,
-				   IRQF_SHARED, "PCIe BW notif", srv);
-	if (ret)
-		return ret;
-
-	pcie_enable_link_bandwidth_notification(srv->port);
-	pci_info(srv->port, "enabled with IRQ %d\n", srv->irq);
-
-	return 0;
-}
-
-static void pcie_bandwidth_notification_remove(struct pcie_device *srv)
-{
-	pcie_disable_link_bandwidth_notification(srv->port);
-	free_irq(srv->irq, srv);
-}
-
-static int pcie_bandwidth_notification_suspend(struct pcie_device *srv)
-{
-	pcie_disable_link_bandwidth_notification(srv->port);
-	return 0;
-}
-
-static int pcie_bandwidth_notification_resume(struct pcie_device *srv)
-{
-	pcie_enable_link_bandwidth_notification(srv->port);
-	return 0;
-}
-
-static struct pcie_port_service_driver pcie_bandwidth_notification_driver = {
-	.name		= "pcie_bw_notification",
-	.port_type	= PCIE_ANY_PORT,
-	.service	= PCIE_PORT_SERVICE_BWNOTIF,
-	.probe		= pcie_bandwidth_notification_probe,
-	.suspend	= pcie_bandwidth_notification_suspend,
-	.resume		= pcie_bandwidth_notification_resume,
-	.remove		= pcie_bandwidth_notification_remove,
-};
-
-int __init pcie_bandwidth_notification_init(void)
-{
-	return pcie_port_service_register(&pcie_bandwidth_notification_driver);
-}
diff --git a/drivers/pci/pcie/err.c b/drivers/pci/pcie/err.c
index 510f31f0ef6d0..4798bd6de97d5 100644
--- a/drivers/pci/pcie/err.c
+++ b/drivers/pci/pcie/err.c
@@ -198,8 +198,7 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
 	pci_dbg(bridge, "broadcast error_detected message\n");
 	if (state == pci_channel_io_frozen) {
 		pci_walk_bridge(bridge, report_frozen_detected, &status);
-		status = reset_subordinates(bridge);
-		if (status != PCI_ERS_RESULT_RECOVERED) {
+		if (reset_subordinates(bridge) != PCI_ERS_RESULT_RECOVERED) {
 			pci_warn(bridge, "subordinate device reset failed\n");
 			goto failed;
 		}
diff --git a/drivers/pci/pcie/portdrv.h b/drivers/pci/pcie/portdrv.h
index af7cf237432ac..2ff5724b8f13f 100644
--- a/drivers/pci/pcie/portdrv.h
+++ b/drivers/pci/pcie/portdrv.h
@@ -53,12 +53,6 @@ int pcie_dpc_init(void);
 static inline int pcie_dpc_init(void) { return 0; }
 #endif
 
-#ifdef CONFIG_PCIE_BW
-int pcie_bandwidth_notification_init(void);
-#else
-static inline int pcie_bandwidth_notification_init(void) { return 0; }
-#endif
-
 /* Port Type */
 #define PCIE_ANY_PORT			(~0)
 
diff --git a/drivers/pci/pcie/portdrv_pci.c b/drivers/pci/pcie/portdrv_pci.c
index 0b250bc5f4050..8bd4992a4f328 100644
--- a/drivers/pci/pcie/portdrv_pci.c
+++ b/drivers/pci/pcie/portdrv_pci.c
@@ -255,7 +255,6 @@ static void __init pcie_init_services(void)
 	pcie_pme_init();
 	pcie_dpc_init();
 	pcie_hp_init();
-	pcie_bandwidth_notification_init();
 }
 
 static int __init pcie_portdrv_init(void)
diff --git a/drivers/perf/arm_dmc620_pmu.c b/drivers/perf/arm_dmc620_pmu.c
index 004930eb4bbb6..b50b47f1a0d92 100644
--- a/drivers/perf/arm_dmc620_pmu.c
+++ b/drivers/perf/arm_dmc620_pmu.c
@@ -681,6 +681,7 @@ static int dmc620_pmu_device_probe(struct platform_device *pdev)
 	if (!name) {
 		dev_err(&pdev->dev,
 			  "Create name failed, PMU @%pa\n", &res->start);
+		ret = -ENOMEM;
 		goto out_teardown_dev;
 	}
 
diff --git a/drivers/platform/olpc/olpc-ec.c b/drivers/platform/olpc/olpc-ec.c
index f64b82824db28..2db7113383fdc 100644
--- a/drivers/platform/olpc/olpc-ec.c
+++ b/drivers/platform/olpc/olpc-ec.c
@@ -426,11 +426,8 @@ static int olpc_ec_probe(struct platform_device *pdev)
 
 	/* get the EC revision */
 	err = olpc_ec_cmd(EC_FIRMWARE_REV, NULL, 0, &ec->version, 1);
-	if (err) {
-		ec_priv = NULL;
-		kfree(ec);
-		return err;
-	}
+	if (err)
+		goto error;
 
 	config.dev = pdev->dev.parent;
 	config.driver_data = ec;
@@ -440,12 +437,16 @@ static int olpc_ec_probe(struct platform_device *pdev)
 	if (IS_ERR(ec->dcon_rdev)) {
 		dev_err(&pdev->dev, "failed to register DCON regulator\n");
 		err = PTR_ERR(ec->dcon_rdev);
-		kfree(ec);
-		return err;
+		goto error;
 	}
 
 	ec->dbgfs_dir = olpc_ec_setup_debugfs();
 
+	return 0;
+
+error:
+	ec_priv = NULL;
+	kfree(ec);
 	return err;
 }
 
diff --git a/drivers/platform/x86/amd-pmc.c b/drivers/platform/x86/amd-pmc.c
index ef83425724634..b9da58ee9b1e3 100644
--- a/drivers/platform/x86/amd-pmc.c
+++ b/drivers/platform/x86/amd-pmc.c
@@ -210,31 +210,39 @@ static int amd_pmc_probe(struct platform_device *pdev)
 	dev->dev = &pdev->dev;
 
 	rdev = pci_get_domain_bus_and_slot(0, 0, PCI_DEVFN(0, 0));
-	if (!rdev || !pci_match_id(pmc_pci_ids, rdev))
+	if (!rdev || !pci_match_id(pmc_pci_ids, rdev)) {
+		pci_dev_put(rdev);
 		return -ENODEV;
+	}
 
 	dev->cpu_id = rdev->device;
 	err = pci_write_config_dword(rdev, AMD_PMC_SMU_INDEX_ADDRESS, AMD_PMC_BASE_ADDR_LO);
 	if (err) {
 		dev_err(dev->dev, "error writing to 0x%x\n", AMD_PMC_SMU_INDEX_ADDRESS);
+		pci_dev_put(rdev);
 		return pcibios_err_to_errno(err);
 	}
 
 	err = pci_read_config_dword(rdev, AMD_PMC_SMU_INDEX_DATA, &val);
-	if (err)
+	if (err) {
+		pci_dev_put(rdev);
 		return pcibios_err_to_errno(err);
+	}
 
 	base_addr_lo = val & AMD_PMC_BASE_ADDR_HI_MASK;
 
 	err = pci_write_config_dword(rdev, AMD_PMC_SMU_INDEX_ADDRESS, AMD_PMC_BASE_ADDR_HI);
 	if (err) {
 		dev_err(dev->dev, "error writing to 0x%x\n", AMD_PMC_SMU_INDEX_ADDRESS);
+		pci_dev_put(rdev);
 		return pcibios_err_to_errno(err);
 	}
 
 	err = pci_read_config_dword(rdev, AMD_PMC_SMU_INDEX_DATA, &val);
-	if (err)
+	if (err) {
+		pci_dev_put(rdev);
 		return pcibios_err_to_errno(err);
+	}
 
 	base_addr_hi = val & AMD_PMC_BASE_ADDR_LO_MASK;
 	pci_dev_put(rdev);
diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
index c7eb9a10c680d..3101eab0adddb 100644
--- a/drivers/s390/block/dasd.c
+++ b/drivers/s390/block/dasd.c
@@ -3068,7 +3068,8 @@ static blk_status_t do_dasd_request(struct blk_mq_hw_ctx *hctx,
 
 	basedev = block->base;
 	spin_lock_irq(&dq->lock);
-	if (basedev->state < DASD_STATE_READY) {
+	if (basedev->state < DASD_STATE_READY ||
+	    test_bit(DASD_FLAG_OFFLINE, &basedev->flags)) {
 		DBF_DEV_EVENT(DBF_ERR, basedev,
 			      "device not ready for request %p", req);
 		rc = BLK_STS_IOERR;
@@ -3503,8 +3504,6 @@ void dasd_generic_remove(struct ccw_device *cdev)
 	struct dasd_device *device;
 	struct dasd_block *block;
 
-	cdev->handler = NULL;
-
 	device = dasd_device_from_cdev(cdev);
 	if (IS_ERR(device)) {
 		dasd_remove_sysfs_files(cdev);
@@ -3523,6 +3522,7 @@ void dasd_generic_remove(struct ccw_device *cdev)
 	 * no quite down yet.
 	 */
 	dasd_set_target_state(device, DASD_STATE_NEW);
+	cdev->handler = NULL;
 	/* dasd_delete_device destroys the device reference. */
|
|
block = device->block;
|
|
dasd_delete_device(device);
|
|
diff --git a/drivers/s390/cio/vfio_ccw_ops.c b/drivers/s390/cio/vfio_ccw_ops.c
|
|
index 68106be4ba7a1..767ac41686fe2 100644
|
|
--- a/drivers/s390/cio/vfio_ccw_ops.c
|
|
+++ b/drivers/s390/cio/vfio_ccw_ops.c
|
|
@@ -543,7 +543,7 @@ static ssize_t vfio_ccw_mdev_ioctl(struct mdev_device *mdev,
|
|
if (ret)
|
|
return ret;
|
|
|
|
- return copy_to_user((void __user *)arg, &info, minsz);
|
|
+ return copy_to_user((void __user *)arg, &info, minsz) ? -EFAULT : 0;
|
|
}
|
|
case VFIO_DEVICE_GET_REGION_INFO:
|
|
{
|
|
@@ -561,7 +561,7 @@ static ssize_t vfio_ccw_mdev_ioctl(struct mdev_device *mdev,
|
|
if (ret)
|
|
return ret;
|
|
|
|
- return copy_to_user((void __user *)arg, &info, minsz);
|
|
+ return copy_to_user((void __user *)arg, &info, minsz) ? -EFAULT : 0;
|
|
}
|
|
case VFIO_DEVICE_GET_IRQ_INFO:
|
|
{
|
|
@@ -582,7 +582,7 @@ static ssize_t vfio_ccw_mdev_ioctl(struct mdev_device *mdev,
|
|
if (info.count == -1)
|
|
return -EINVAL;
|
|
|
|
- return copy_to_user((void __user *)arg, &info, minsz);
|
|
+ return copy_to_user((void __user *)arg, &info, minsz) ? -EFAULT : 0;
|
|
}
|
|
case VFIO_DEVICE_SET_IRQS:
|
|
{
|
|
diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
|
|
index 41fc2e4135fe1..1ffdd411201cd 100644
|
|
--- a/drivers/s390/crypto/vfio_ap_ops.c
|
|
+++ b/drivers/s390/crypto/vfio_ap_ops.c
|
|
@@ -1286,7 +1286,7 @@ static int vfio_ap_mdev_get_device_info(unsigned long arg)
|
|
info.num_regions = 0;
|
|
info.num_irqs = 0;
|
|
|
|
- return copy_to_user((void __user *)arg, &info, minsz);
|
|
+ return copy_to_user((void __user *)arg, &info, minsz) ? -EFAULT : 0;
|
|
}
|
|
|
|
static ssize_t vfio_ap_mdev_ioctl(struct mdev_device *mdev,
|
|
diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h
|
|
index 28f637042d444..7aed775ee874e 100644
|
|
--- a/drivers/s390/net/qeth_core.h
|
|
+++ b/drivers/s390/net/qeth_core.h
|
|
@@ -436,7 +436,7 @@ struct qeth_qdio_out_buffer {
|
|
int is_header[QDIO_MAX_ELEMENTS_PER_BUFFER];
|
|
|
|
struct qeth_qdio_out_q *q;
|
|
- struct qeth_qdio_out_buffer *next_pending;
|
|
+ struct list_head list_entry;
|
|
};
|
|
|
|
struct qeth_card;
|
|
@@ -500,6 +500,7 @@ struct qeth_qdio_out_q {
|
|
struct qdio_buffer *qdio_bufs[QDIO_MAX_BUFFERS_PER_Q];
|
|
struct qeth_qdio_out_buffer *bufs[QDIO_MAX_BUFFERS_PER_Q];
|
|
struct qdio_outbuf_state *bufstates; /* convenience pointer */
|
|
+ struct list_head pending_bufs;
|
|
struct qeth_out_q_stats stats;
|
|
spinlock_t lock;
|
|
unsigned int priority;
|
|
diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
|
|
index cf18d87da41e2..5e4dcc9aae1b6 100644
|
|
--- a/drivers/s390/net/qeth_core_main.c
|
|
+++ b/drivers/s390/net/qeth_core_main.c
|
|
@@ -73,8 +73,6 @@ static void qeth_free_qdio_queues(struct qeth_card *card);
|
|
static void qeth_notify_skbs(struct qeth_qdio_out_q *queue,
|
|
struct qeth_qdio_out_buffer *buf,
|
|
enum iucv_tx_notify notification);
|
|
-static void qeth_tx_complete_buf(struct qeth_qdio_out_buffer *buf, bool error,
|
|
- int budget);
|
|
|
|
static void qeth_close_dev_handler(struct work_struct *work)
|
|
{
|
|
@@ -465,41 +463,6 @@ static enum iucv_tx_notify qeth_compute_cq_notification(int sbalf15,
|
|
return n;
|
|
}
|
|
|
|
-static void qeth_cleanup_handled_pending(struct qeth_qdio_out_q *q, int bidx,
|
|
- int forced_cleanup)
|
|
-{
|
|
- if (q->card->options.cq != QETH_CQ_ENABLED)
|
|
- return;
|
|
-
|
|
- if (q->bufs[bidx]->next_pending != NULL) {
|
|
- struct qeth_qdio_out_buffer *head = q->bufs[bidx];
|
|
- struct qeth_qdio_out_buffer *c = q->bufs[bidx]->next_pending;
|
|
-
|
|
- while (c) {
|
|
- if (forced_cleanup ||
|
|
- atomic_read(&c->state) == QETH_QDIO_BUF_EMPTY) {
|
|
- struct qeth_qdio_out_buffer *f = c;
|
|
-
|
|
- QETH_CARD_TEXT(f->q->card, 5, "fp");
|
|
- QETH_CARD_TEXT_(f->q->card, 5, "%lx", (long) f);
|
|
- /* release here to avoid interleaving between
|
|
- outbound tasklet and inbound tasklet
|
|
- regarding notifications and lifecycle */
|
|
- qeth_tx_complete_buf(c, forced_cleanup, 0);
|
|
-
|
|
- c = f->next_pending;
|
|
- WARN_ON_ONCE(head->next_pending != f);
|
|
- head->next_pending = c;
|
|
- kmem_cache_free(qeth_qdio_outbuf_cache, f);
|
|
- } else {
|
|
- head = c;
|
|
- c = c->next_pending;
|
|
- }
|
|
-
|
|
- }
|
|
- }
|
|
-}
|
|
-
|
|
static void qeth_qdio_handle_aob(struct qeth_card *card,
|
|
unsigned long phys_aob_addr)
|
|
{
|
|
@@ -507,6 +470,7 @@ static void qeth_qdio_handle_aob(struct qeth_card *card,
|
|
struct qaob *aob;
|
|
struct qeth_qdio_out_buffer *buffer;
|
|
enum iucv_tx_notify notification;
|
|
+ struct qeth_qdio_out_q *queue;
|
|
unsigned int i;
|
|
|
|
aob = (struct qaob *) phys_to_virt(phys_aob_addr);
|
|
@@ -537,7 +501,7 @@ static void qeth_qdio_handle_aob(struct qeth_card *card,
|
|
qeth_notify_skbs(buffer->q, buffer, notification);
|
|
|
|
/* Free dangling allocations. The attached skbs are handled by
|
|
- * qeth_cleanup_handled_pending().
|
|
+ * qeth_tx_complete_pending_bufs().
|
|
*/
|
|
for (i = 0;
|
|
i < aob->sb_count && i < QETH_MAX_BUFFER_ELEMENTS(card);
|
|
@@ -549,7 +513,9 @@ static void qeth_qdio_handle_aob(struct qeth_card *card,
|
|
buffer->is_header[i] = 0;
|
|
}
|
|
|
|
+ queue = buffer->q;
|
|
atomic_set(&buffer->state, QETH_QDIO_BUF_EMPTY);
|
|
+ napi_schedule(&queue->napi);
|
|
break;
|
|
default:
|
|
WARN_ON_ONCE(1);
|
|
@@ -1420,9 +1386,6 @@ static void qeth_tx_complete_buf(struct qeth_qdio_out_buffer *buf, bool error,
|
|
struct qeth_qdio_out_q *queue = buf->q;
|
|
struct sk_buff *skb;
|
|
|
|
- if (atomic_read(&buf->state) == QETH_QDIO_BUF_PENDING)
|
|
- qeth_notify_skbs(queue, buf, TX_NOTIFY_GENERALERROR);
|
|
-
|
|
/* Empty buffer? */
|
|
if (buf->next_element_to_fill == 0)
|
|
return;
|
|
@@ -1484,14 +1447,38 @@ static void qeth_clear_output_buffer(struct qeth_qdio_out_q *queue,
|
|
atomic_set(&buf->state, QETH_QDIO_BUF_EMPTY);
|
|
}
|
|
|
|
+static void qeth_tx_complete_pending_bufs(struct qeth_card *card,
|
|
+ struct qeth_qdio_out_q *queue,
|
|
+ bool drain)
|
|
+{
|
|
+ struct qeth_qdio_out_buffer *buf, *tmp;
|
|
+
|
|
+ list_for_each_entry_safe(buf, tmp, &queue->pending_bufs, list_entry) {
|
|
+ if (drain || atomic_read(&buf->state) == QETH_QDIO_BUF_EMPTY) {
|
|
+ QETH_CARD_TEXT(card, 5, "fp");
|
|
+ QETH_CARD_TEXT_(card, 5, "%lx", (long) buf);
|
|
+
|
|
+ if (drain)
|
|
+ qeth_notify_skbs(queue, buf,
|
|
+ TX_NOTIFY_GENERALERROR);
|
|
+ qeth_tx_complete_buf(buf, drain, 0);
|
|
+
|
|
+ list_del(&buf->list_entry);
|
|
+ kmem_cache_free(qeth_qdio_outbuf_cache, buf);
|
|
+ }
|
|
+ }
|
|
+}
|
|
+
|
|
static void qeth_drain_output_queue(struct qeth_qdio_out_q *q, bool free)
|
|
{
|
|
int j;
|
|
|
|
+ qeth_tx_complete_pending_bufs(q->card, q, true);
|
|
+
|
|
for (j = 0; j < QDIO_MAX_BUFFERS_PER_Q; ++j) {
|
|
if (!q->bufs[j])
|
|
continue;
|
|
- qeth_cleanup_handled_pending(q, j, 1);
|
|
+
|
|
qeth_clear_output_buffer(q, q->bufs[j], true, 0);
|
|
if (free) {
|
|
kmem_cache_free(qeth_qdio_outbuf_cache, q->bufs[j]);
|
|
@@ -2611,7 +2598,6 @@ static int qeth_init_qdio_out_buf(struct qeth_qdio_out_q *q, int bidx)
|
|
skb_queue_head_init(&newbuf->skb_list);
|
|
lockdep_set_class(&newbuf->skb_list.lock, &qdio_out_skb_queue_key);
|
|
newbuf->q = q;
|
|
- newbuf->next_pending = q->bufs[bidx];
|
|
atomic_set(&newbuf->state, QETH_QDIO_BUF_EMPTY);
|
|
q->bufs[bidx] = newbuf;
|
|
return 0;
|
|
@@ -2630,15 +2616,28 @@ static void qeth_free_output_queue(struct qeth_qdio_out_q *q)
|
|
static struct qeth_qdio_out_q *qeth_alloc_output_queue(void)
|
|
{
|
|
struct qeth_qdio_out_q *q = kzalloc(sizeof(*q), GFP_KERNEL);
|
|
+ unsigned int i;
|
|
|
|
if (!q)
|
|
return NULL;
|
|
|
|
- if (qdio_alloc_buffers(q->qdio_bufs, QDIO_MAX_BUFFERS_PER_Q)) {
|
|
- kfree(q);
|
|
- return NULL;
|
|
+ if (qdio_alloc_buffers(q->qdio_bufs, QDIO_MAX_BUFFERS_PER_Q))
|
|
+ goto err_qdio_bufs;
|
|
+
|
|
+ for (i = 0; i < QDIO_MAX_BUFFERS_PER_Q; i++) {
|
|
+ if (qeth_init_qdio_out_buf(q, i))
|
|
+ goto err_out_bufs;
|
|
}
|
|
+
|
|
return q;
|
|
+
|
|
+err_out_bufs:
|
|
+ while (i > 0)
|
|
+ kmem_cache_free(qeth_qdio_outbuf_cache, q->bufs[--i]);
|
|
+ qdio_free_buffers(q->qdio_bufs, QDIO_MAX_BUFFERS_PER_Q);
|
|
+err_qdio_bufs:
|
|
+ kfree(q);
|
|
+ return NULL;
|
|
}
|
|
|
|
static void qeth_tx_completion_timer(struct timer_list *timer)
|
|
@@ -2651,7 +2650,7 @@ static void qeth_tx_completion_timer(struct timer_list *timer)
|
|
|
|
static int qeth_alloc_qdio_queues(struct qeth_card *card)
|
|
{
|
|
- int i, j;
|
|
+ unsigned int i;
|
|
|
|
QETH_CARD_TEXT(card, 2, "allcqdbf");
|
|
|
|
@@ -2680,18 +2679,12 @@ static int qeth_alloc_qdio_queues(struct qeth_card *card)
|
|
card->qdio.out_qs[i] = queue;
|
|
queue->card = card;
|
|
queue->queue_no = i;
|
|
+ INIT_LIST_HEAD(&queue->pending_bufs);
|
|
spin_lock_init(&queue->lock);
|
|
timer_setup(&queue->timer, qeth_tx_completion_timer, 0);
|
|
queue->coalesce_usecs = QETH_TX_COALESCE_USECS;
|
|
queue->max_coalesced_frames = QETH_TX_MAX_COALESCED_FRAMES;
|
|
queue->priority = QETH_QIB_PQUE_PRIO_DEFAULT;
|
|
-
|
|
- /* give outbound qeth_qdio_buffers their qdio_buffers */
|
|
- for (j = 0; j < QDIO_MAX_BUFFERS_PER_Q; ++j) {
|
|
- WARN_ON(queue->bufs[j]);
|
|
- if (qeth_init_qdio_out_buf(queue, j))
|
|
- goto out_freeoutqbufs;
|
|
- }
|
|
}
|
|
|
|
/* completion */
|
|
@@ -2700,13 +2693,6 @@ static int qeth_alloc_qdio_queues(struct qeth_card *card)
|
|
|
|
return 0;
|
|
|
|
-out_freeoutqbufs:
|
|
- while (j > 0) {
|
|
- --j;
|
|
- kmem_cache_free(qeth_qdio_outbuf_cache,
|
|
- card->qdio.out_qs[i]->bufs[j]);
|
|
- card->qdio.out_qs[i]->bufs[j] = NULL;
|
|
- }
|
|
out_freeoutq:
|
|
while (i > 0) {
|
|
qeth_free_output_queue(card->qdio.out_qs[--i]);
|
|
@@ -6100,6 +6086,8 @@ static void qeth_iqd_tx_complete(struct qeth_qdio_out_q *queue,
|
|
qeth_schedule_recovery(card);
|
|
}
|
|
|
|
+ list_add(&buffer->list_entry,
|
|
+ &queue->pending_bufs);
|
|
/* Skip clearing the buffer: */
|
|
return;
|
|
case QETH_QDIO_BUF_QAOB_OK:
|
|
@@ -6155,6 +6143,8 @@ static int qeth_tx_poll(struct napi_struct *napi, int budget)
|
|
unsigned int bytes = 0;
|
|
int completed;
|
|
|
|
+ qeth_tx_complete_pending_bufs(card, queue, false);
|
|
+
|
|
if (qeth_out_queue_is_empty(queue)) {
|
|
napi_complete(napi);
|
|
return 0;
|
|
@@ -6187,7 +6177,6 @@ static int qeth_tx_poll(struct napi_struct *napi, int budget)
|
|
|
|
qeth_handle_send_error(card, buffer, error);
|
|
qeth_iqd_tx_complete(queue, bidx, error, budget);
|
|
- qeth_cleanup_handled_pending(queue, bidx, false);
|
|
}
|
|
|
|
netdev_tx_completed_queue(txq, packets, bytes);
|
|
@@ -7239,9 +7228,7 @@ int qeth_open(struct net_device *dev)
|
|
card->data.state = CH_STATE_UP;
|
|
netif_tx_start_all_queues(dev);
|
|
|
|
- napi_enable(&card->napi);
|
|
local_bh_disable();
|
|
- napi_schedule(&card->napi);
|
|
if (IS_IQD(card)) {
|
|
struct qeth_qdio_out_q *queue;
|
|
unsigned int i;
|
|
@@ -7253,8 +7240,12 @@ int qeth_open(struct net_device *dev)
|
|
napi_schedule(&queue->napi);
|
|
}
|
|
}
|
|
+
|
|
+ napi_enable(&card->napi);
|
|
+ napi_schedule(&card->napi);
|
|
/* kick-start the NAPI softirq: */
|
|
local_bh_enable();
|
|
+
|
|
return 0;
|
|
}
|
|
EXPORT_SYMBOL_GPL(qeth_open);
|
|
@@ -7264,6 +7255,11 @@ int qeth_stop(struct net_device *dev)
|
|
struct qeth_card *card = dev->ml_priv;
|
|
|
|
QETH_CARD_TEXT(card, 4, "qethstop");
|
|
+
|
|
+ napi_disable(&card->napi);
|
|
+ cancel_delayed_work_sync(&card->buffer_reclaim_work);
|
|
+ qdio_stop_irq(CARD_DDEV(card));
|
|
+
|
|
if (IS_IQD(card)) {
|
|
struct qeth_qdio_out_q *queue;
|
|
unsigned int i;
|
|
@@ -7284,10 +7280,6 @@ int qeth_stop(struct net_device *dev)
|
|
netif_tx_disable(dev);
|
|
}
|
|
|
|
- napi_disable(&card->napi);
|
|
- cancel_delayed_work_sync(&card->buffer_reclaim_work);
|
|
- qdio_stop_irq(CARD_DDEV(card));
|
|
-
|
|
return 0;
|
|
}
|
|
EXPORT_SYMBOL_GPL(qeth_stop);
|
|
diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
|
|
index 1851015299b3a..af40de7e51e7d 100644
|
|
--- a/drivers/scsi/libiscsi.c
|
|
+++ b/drivers/scsi/libiscsi.c
|
|
@@ -1532,14 +1532,9 @@ check_mgmt:
|
|
}
|
|
rc = iscsi_prep_scsi_cmd_pdu(conn->task);
|
|
if (rc) {
|
|
- if (rc == -ENOMEM || rc == -EACCES) {
|
|
- spin_lock_bh(&conn->taskqueuelock);
|
|
- list_add_tail(&conn->task->running,
|
|
- &conn->cmdqueue);
|
|
- conn->task = NULL;
|
|
- spin_unlock_bh(&conn->taskqueuelock);
|
|
- goto done;
|
|
- } else
|
|
+ if (rc == -ENOMEM || rc == -EACCES)
|
|
+ fail_scsi_task(conn->task, DID_IMM_RETRY);
|
|
+ else
|
|
fail_scsi_task(conn->task, DID_ABORT);
|
|
spin_lock_bh(&conn->taskqueuelock);
|
|
continue;
|
|
diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c
|
|
index dd15246d5b037..ea43dff40a856 100644
|
|
--- a/drivers/scsi/pm8001/pm8001_hwi.c
|
|
+++ b/drivers/scsi/pm8001/pm8001_hwi.c
|
|
@@ -3038,8 +3038,8 @@ void pm8001_mpi_set_nvmd_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
|
|
complete(pm8001_ha->nvmd_completion);
|
|
pm8001_dbg(pm8001_ha, MSG, "Set nvm data complete!\n");
|
|
if ((dlen_status & NVMD_STAT) != 0) {
|
|
- pm8001_dbg(pm8001_ha, FAIL, "Set nvm data error!\n");
|
|
- return;
|
|
+ pm8001_dbg(pm8001_ha, FAIL, "Set nvm data error %x\n",
|
|
+ dlen_status);
|
|
}
|
|
ccb->task = NULL;
|
|
ccb->ccb_tag = 0xFFFFFFFF;
|
|
@@ -3062,11 +3062,17 @@ pm8001_mpi_get_nvmd_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
|
|
|
|
pm8001_dbg(pm8001_ha, MSG, "Get nvm data complete!\n");
|
|
if ((dlen_status & NVMD_STAT) != 0) {
|
|
- pm8001_dbg(pm8001_ha, FAIL, "Get nvm data error!\n");
|
|
+ pm8001_dbg(pm8001_ha, FAIL, "Get nvm data error %x\n",
|
|
+ dlen_status);
|
|
complete(pm8001_ha->nvmd_completion);
|
|
+ /* We should free tag during failure also, the tag is not being
|
|
+ * freed by requesting path anywhere.
|
|
+ */
|
|
+ ccb->task = NULL;
|
|
+ ccb->ccb_tag = 0xFFFFFFFF;
|
|
+ pm8001_tag_free(pm8001_ha, tag);
|
|
return;
|
|
}
|
|
-
|
|
if (ir_tds_bn_dps_das_nvm & IPMode) {
|
|
/* indirect mode - IR bit set */
|
|
pm8001_dbg(pm8001_ha, MSG, "Get NVMD success, IR=1\n");
|
|
diff --git a/drivers/scsi/ufs/ufs-sysfs.c b/drivers/scsi/ufs/ufs-sysfs.c
|
|
index 08e72b7eef6aa..50e90416262bc 100644
|
|
--- a/drivers/scsi/ufs/ufs-sysfs.c
|
|
+++ b/drivers/scsi/ufs/ufs-sysfs.c
|
|
@@ -792,7 +792,8 @@ static ssize_t _pname##_show(struct device *dev, \
|
|
struct scsi_device *sdev = to_scsi_device(dev); \
|
|
struct ufs_hba *hba = shost_priv(sdev->host); \
|
|
u8 lun = ufshcd_scsi_to_upiu_lun(sdev->lun); \
|
|
- if (!ufs_is_valid_unit_desc_lun(&hba->dev_info, lun)) \
|
|
+ if (!ufs_is_valid_unit_desc_lun(&hba->dev_info, lun, \
|
|
+ _duname##_DESC_PARAM##_puname)) \
|
|
return -EINVAL; \
|
|
return ufs_sysfs_read_desc_param(hba, QUERY_DESC_IDN_##_duname, \
|
|
lun, _duname##_DESC_PARAM##_puname, buf, _size); \
|
|
diff --git a/drivers/scsi/ufs/ufs.h b/drivers/scsi/ufs/ufs.h
|
|
index 14dfda735adf5..580aa56965d06 100644
|
|
--- a/drivers/scsi/ufs/ufs.h
|
|
+++ b/drivers/scsi/ufs/ufs.h
|
|
@@ -552,13 +552,15 @@ struct ufs_dev_info {
|
|
* @return: true if the lun has a matching unit descriptor, false otherwise
|
|
*/
|
|
static inline bool ufs_is_valid_unit_desc_lun(struct ufs_dev_info *dev_info,
|
|
- u8 lun)
|
|
+ u8 lun, u8 param_offset)
|
|
{
|
|
if (!dev_info || !dev_info->max_lu_supported) {
|
|
pr_err("Max General LU supported by UFS isn't initialized\n");
|
|
return false;
|
|
}
|
|
-
|
|
+ /* WB is available only for the logical unit from 0 to 7 */
|
|
+ if (param_offset == UNIT_DESC_PARAM_WB_BUF_ALLOC_UNITS)
|
|
+ return lun < UFS_UPIU_MAX_WB_LUN_ID;
|
|
return lun == UFS_UPIU_RPMB_WLUN || (lun < dev_info->max_lu_supported);
|
|
}
|
|
|
|
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
|
|
index 428b9e0ac47e9..16e1bd1aa49d5 100644
|
|
--- a/drivers/scsi/ufs/ufshcd.c
|
|
+++ b/drivers/scsi/ufs/ufshcd.c
|
|
@@ -1184,19 +1184,30 @@ static int ufshcd_clock_scaling_prepare(struct ufs_hba *hba)
|
|
*/
|
|
ufshcd_scsi_block_requests(hba);
|
|
down_write(&hba->clk_scaling_lock);
|
|
- if (ufshcd_wait_for_doorbell_clr(hba, DOORBELL_CLR_TOUT_US)) {
|
|
+
|
|
+ if (!hba->clk_scaling.is_allowed ||
|
|
+ ufshcd_wait_for_doorbell_clr(hba, DOORBELL_CLR_TOUT_US)) {
|
|
ret = -EBUSY;
|
|
up_write(&hba->clk_scaling_lock);
|
|
ufshcd_scsi_unblock_requests(hba);
|
|
+ goto out;
|
|
}
|
|
|
|
+ /* let's not get into low power until clock scaling is completed */
|
|
+ ufshcd_hold(hba, false);
|
|
+
|
|
+out:
|
|
return ret;
|
|
}
|
|
|
|
-static void ufshcd_clock_scaling_unprepare(struct ufs_hba *hba)
|
|
+static void ufshcd_clock_scaling_unprepare(struct ufs_hba *hba, bool writelock)
|
|
{
|
|
- up_write(&hba->clk_scaling_lock);
|
|
+ if (writelock)
|
|
+ up_write(&hba->clk_scaling_lock);
|
|
+ else
|
|
+ up_read(&hba->clk_scaling_lock);
|
|
ufshcd_scsi_unblock_requests(hba);
|
|
+ ufshcd_release(hba);
|
|
}
|
|
|
|
/**
|
|
@@ -1211,13 +1222,11 @@ static void ufshcd_clock_scaling_unprepare(struct ufs_hba *hba)
|
|
static int ufshcd_devfreq_scale(struct ufs_hba *hba, bool scale_up)
|
|
{
|
|
int ret = 0;
|
|
-
|
|
- /* let's not get into low power until clock scaling is completed */
|
|
- ufshcd_hold(hba, false);
|
|
+ bool is_writelock = true;
|
|
|
|
ret = ufshcd_clock_scaling_prepare(hba);
|
|
if (ret)
|
|
- goto out;
|
|
+ return ret;
|
|
|
|
/* scale down the gear before scaling down clocks */
|
|
if (!scale_up) {
|
|
@@ -1243,14 +1252,12 @@ static int ufshcd_devfreq_scale(struct ufs_hba *hba, bool scale_up)
|
|
}
|
|
|
|
/* Enable Write Booster if we have scaled up else disable it */
|
|
- up_write(&hba->clk_scaling_lock);
|
|
+ downgrade_write(&hba->clk_scaling_lock);
|
|
+ is_writelock = false;
|
|
ufshcd_wb_ctrl(hba, scale_up);
|
|
- down_write(&hba->clk_scaling_lock);
|
|
|
|
out_unprepare:
|
|
- ufshcd_clock_scaling_unprepare(hba);
|
|
-out:
|
|
- ufshcd_release(hba);
|
|
+ ufshcd_clock_scaling_unprepare(hba, is_writelock);
|
|
return ret;
|
|
}
|
|
|
|
@@ -1524,7 +1531,7 @@ static ssize_t ufshcd_clkscale_enable_show(struct device *dev,
|
|
{
|
|
struct ufs_hba *hba = dev_get_drvdata(dev);
|
|
|
|
- return snprintf(buf, PAGE_SIZE, "%d\n", hba->clk_scaling.is_allowed);
|
|
+ return snprintf(buf, PAGE_SIZE, "%d\n", hba->clk_scaling.is_enabled);
|
|
}
|
|
|
|
static ssize_t ufshcd_clkscale_enable_store(struct device *dev,
|
|
@@ -1538,7 +1545,7 @@ static ssize_t ufshcd_clkscale_enable_store(struct device *dev,
|
|
return -EINVAL;
|
|
|
|
value = !!value;
|
|
- if (value == hba->clk_scaling.is_allowed)
|
|
+ if (value == hba->clk_scaling.is_enabled)
|
|
goto out;
|
|
|
|
pm_runtime_get_sync(hba->dev);
|
|
@@ -1547,7 +1554,7 @@ static ssize_t ufshcd_clkscale_enable_store(struct device *dev,
|
|
cancel_work_sync(&hba->clk_scaling.suspend_work);
|
|
cancel_work_sync(&hba->clk_scaling.resume_work);
|
|
|
|
- hba->clk_scaling.is_allowed = value;
|
|
+ hba->clk_scaling.is_enabled = value;
|
|
|
|
if (value) {
|
|
ufshcd_resume_clkscaling(hba);
|
|
@@ -1885,8 +1892,6 @@ static void ufshcd_init_clk_scaling(struct ufs_hba *hba)
|
|
snprintf(wq_name, sizeof(wq_name), "ufs_clkscaling_%d",
|
|
hba->host->host_no);
|
|
hba->clk_scaling.workq = create_singlethread_workqueue(wq_name);
|
|
-
|
|
- ufshcd_clkscaling_init_sysfs(hba);
|
|
}
|
|
|
|
static void ufshcd_exit_clk_scaling(struct ufs_hba *hba)
|
|
@@ -1894,6 +1899,8 @@ static void ufshcd_exit_clk_scaling(struct ufs_hba *hba)
|
|
if (!ufshcd_is_clkscaling_supported(hba))
|
|
return;
|
|
|
|
+ if (hba->clk_scaling.enable_attr.attr.name)
|
|
+ device_remove_file(hba->dev, &hba->clk_scaling.enable_attr);
|
|
destroy_workqueue(hba->clk_scaling.workq);
|
|
ufshcd_devfreq_remove(hba);
|
|
}
|
|
@@ -1958,7 +1965,7 @@ static void ufshcd_clk_scaling_start_busy(struct ufs_hba *hba)
|
|
if (!hba->clk_scaling.active_reqs++)
|
|
queue_resume_work = true;
|
|
|
|
- if (!hba->clk_scaling.is_allowed || hba->pm_op_in_progress)
|
|
+ if (!hba->clk_scaling.is_enabled || hba->pm_op_in_progress)
|
|
return;
|
|
|
|
if (queue_resume_work)
|
|
@@ -3427,7 +3434,7 @@ static inline int ufshcd_read_unit_desc_param(struct ufs_hba *hba,
|
|
* Unit descriptors are only available for general purpose LUs (LUN id
|
|
* from 0 to 7) and RPMB Well known LU.
|
|
*/
|
|
- if (!ufs_is_valid_unit_desc_lun(&hba->dev_info, lun))
|
|
+ if (!ufs_is_valid_unit_desc_lun(&hba->dev_info, lun, param_offset))
|
|
return -EOPNOTSUPP;
|
|
|
|
return ufshcd_read_desc_param(hba, QUERY_DESC_IDN_UNIT, lun,
|
|
@@ -5744,18 +5751,24 @@ static void ufshcd_err_handling_prepare(struct ufs_hba *hba)
|
|
ufshcd_vops_resume(hba, pm_op);
|
|
} else {
|
|
ufshcd_hold(hba, false);
|
|
- if (hba->clk_scaling.is_allowed) {
|
|
+ if (hba->clk_scaling.is_enabled) {
|
|
cancel_work_sync(&hba->clk_scaling.suspend_work);
|
|
cancel_work_sync(&hba->clk_scaling.resume_work);
|
|
ufshcd_suspend_clkscaling(hba);
|
|
}
|
|
+ down_write(&hba->clk_scaling_lock);
|
|
+ hba->clk_scaling.is_allowed = false;
|
|
+ up_write(&hba->clk_scaling_lock);
|
|
}
|
|
}
|
|
|
|
static void ufshcd_err_handling_unprepare(struct ufs_hba *hba)
|
|
{
|
|
ufshcd_release(hba);
|
|
- if (hba->clk_scaling.is_allowed)
|
|
+ down_write(&hba->clk_scaling_lock);
|
|
+ hba->clk_scaling.is_allowed = true;
|
|
+ up_write(&hba->clk_scaling_lock);
|
|
+ if (hba->clk_scaling.is_enabled)
|
|
ufshcd_resume_clkscaling(hba);
|
|
pm_runtime_put(hba->dev);
|
|
}
|
|
@@ -7741,12 +7754,14 @@ static int ufshcd_add_lus(struct ufs_hba *hba)
|
|
sizeof(struct ufs_pa_layer_attr));
|
|
hba->clk_scaling.saved_pwr_info.is_valid = true;
|
|
if (!hba->devfreq) {
|
|
+ hba->clk_scaling.is_allowed = true;
|
|
ret = ufshcd_devfreq_init(hba);
|
|
if (ret)
|
|
goto out;
|
|
- }
|
|
|
|
- hba->clk_scaling.is_allowed = true;
|
|
+ hba->clk_scaling.is_enabled = true;
|
|
+ ufshcd_clkscaling_init_sysfs(hba);
|
|
+ }
|
|
}
|
|
|
|
ufs_bsg_probe(hba);
|
|
@@ -8661,11 +8676,14 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
|
|
ufshcd_hold(hba, false);
|
|
hba->clk_gating.is_suspended = true;
|
|
|
|
- if (hba->clk_scaling.is_allowed) {
|
|
+ if (hba->clk_scaling.is_enabled) {
|
|
cancel_work_sync(&hba->clk_scaling.suspend_work);
|
|
cancel_work_sync(&hba->clk_scaling.resume_work);
|
|
ufshcd_suspend_clkscaling(hba);
|
|
}
|
|
+ down_write(&hba->clk_scaling_lock);
|
|
+ hba->clk_scaling.is_allowed = false;
|
|
+ up_write(&hba->clk_scaling_lock);
|
|
|
|
if (req_dev_pwr_mode == UFS_ACTIVE_PWR_MODE &&
|
|
req_link_state == UIC_LINK_ACTIVE_STATE) {
|
|
@@ -8762,8 +8780,6 @@ disable_clks:
|
|
goto out;
|
|
|
|
set_link_active:
|
|
- if (hba->clk_scaling.is_allowed)
|
|
- ufshcd_resume_clkscaling(hba);
|
|
ufshcd_vreg_set_hpm(hba);
|
|
/*
|
|
* Device hardware reset is required to exit DeepSleep. Also, for
|
|
@@ -8787,7 +8803,10 @@ set_dev_active:
|
|
if (!ufshcd_set_dev_pwr_mode(hba, UFS_ACTIVE_PWR_MODE))
|
|
ufshcd_disable_auto_bkops(hba);
|
|
enable_gating:
|
|
- if (hba->clk_scaling.is_allowed)
|
|
+ down_write(&hba->clk_scaling_lock);
|
|
+ hba->clk_scaling.is_allowed = true;
|
|
+ up_write(&hba->clk_scaling_lock);
|
|
+ if (hba->clk_scaling.is_enabled)
|
|
ufshcd_resume_clkscaling(hba);
|
|
hba->clk_gating.is_suspended = false;
|
|
hba->dev_info.b_rpm_dev_flush_capable = false;
|
|
@@ -8891,7 +8910,10 @@ static int ufshcd_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op)
|
|
|
|
hba->clk_gating.is_suspended = false;
|
|
|
|
- if (hba->clk_scaling.is_allowed)
|
|
+ down_write(&hba->clk_scaling_lock);
|
|
+ hba->clk_scaling.is_allowed = true;
|
|
+ up_write(&hba->clk_scaling_lock);
|
|
+ if (hba->clk_scaling.is_enabled)
|
|
ufshcd_resume_clkscaling(hba);
|
|
|
|
/* Enable Auto-Hibernate if configured */
|
|
@@ -8917,8 +8939,6 @@ disable_vreg:
|
|
ufshcd_vreg_set_lpm(hba);
|
|
disable_irq_and_vops_clks:
|
|
ufshcd_disable_irq(hba);
|
|
- if (hba->clk_scaling.is_allowed)
|
|
- ufshcd_suspend_clkscaling(hba);
|
|
ufshcd_setup_clocks(hba, false);
|
|
if (ufshcd_is_clkgating_allowed(hba)) {
|
|
hba->clk_gating.state = CLKS_OFF;
|
|
@@ -9155,8 +9175,6 @@ void ufshcd_remove(struct ufs_hba *hba)
|
|
|
|
ufshcd_exit_clk_scaling(hba);
|
|
ufshcd_exit_clk_gating(hba);
|
|
- if (ufshcd_is_clkscaling_supported(hba))
|
|
- device_remove_file(hba->dev, &hba->clk_scaling.enable_attr);
|
|
ufshcd_hba_exit(hba);
|
|
}
|
|
EXPORT_SYMBOL_GPL(ufshcd_remove);
|
|
diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
|
|
index 1885ec9126c44..7d0b00f237614 100644
|
|
--- a/drivers/scsi/ufs/ufshcd.h
|
|
+++ b/drivers/scsi/ufs/ufshcd.h
|
|
@@ -419,7 +419,10 @@ struct ufs_saved_pwr_info {
|
|
* @suspend_work: worker to suspend devfreq
|
|
* @resume_work: worker to resume devfreq
|
|
* @min_gear: lowest HS gear to scale down to
|
|
- * @is_allowed: tracks if scaling is currently allowed or not
|
|
+ * @is_enabled: tracks if scaling is currently enabled or not, controlled by
|
|
+ clkscale_enable sysfs node
|
|
+ * @is_allowed: tracks if scaling is currently allowed or not, used to block
|
|
+ clock scaling which is not invoked from devfreq governor
|
|
* @is_busy_started: tracks if busy period has started or not
|
|
* @is_suspended: tracks if devfreq is suspended or not
|
|
*/
|
|
@@ -434,6 +437,7 @@ struct ufs_clk_scaling {
|
|
struct work_struct suspend_work;
|
|
struct work_struct resume_work;
|
|
u32 min_gear;
|
|
+ bool is_enabled;
|
|
bool is_allowed;
|
|
bool is_busy_started;
|
|
bool is_suspended;
|
|
diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c
|
|
index 6eeb39669a866..53c4311cc6ab5 100644
|
|
--- a/drivers/spi/spi-stm32.c
|
|
+++ b/drivers/spi/spi-stm32.c
|
|
@@ -928,8 +928,8 @@ static irqreturn_t stm32h7_spi_irq_thread(int irq, void *dev_id)
|
|
mask |= STM32H7_SPI_SR_RXP;
|
|
|
|
if (!(sr & mask)) {
|
|
- dev_dbg(spi->dev, "spurious IT (sr=0x%08x, ier=0x%08x)\n",
|
|
- sr, ier);
|
|
+ dev_warn(spi->dev, "spurious IT (sr=0x%08x, ier=0x%08x)\n",
|
|
+ sr, ier);
|
|
spin_unlock_irqrestore(&spi->lock, flags);
|
|
return IRQ_NONE;
|
|
}
|
|
@@ -956,15 +956,8 @@ static irqreturn_t stm32h7_spi_irq_thread(int irq, void *dev_id)
|
|
}
|
|
|
|
if (sr & STM32H7_SPI_SR_OVR) {
|
|
- dev_warn(spi->dev, "Overrun: received value discarded\n");
|
|
- if (!spi->cur_usedma && (spi->rx_buf && (spi->rx_len > 0)))
|
|
- stm32h7_spi_read_rxfifo(spi, false);
|
|
- /*
|
|
- * If overrun is detected while using DMA, it means that
|
|
- * something went wrong, so stop the current transfer
|
|
- */
|
|
- if (spi->cur_usedma)
|
|
- end = true;
|
|
+ dev_err(spi->dev, "Overrun: RX data lost\n");
|
|
+ end = true;
|
|
}
|
|
|
|
if (sr & STM32H7_SPI_SR_EOT) {
|
|
diff --git a/drivers/staging/comedi/drivers/addi_apci_1032.c b/drivers/staging/comedi/drivers/addi_apci_1032.c
|
|
index 35b75f0c9200b..81a246fbcc01f 100644
|
|
--- a/drivers/staging/comedi/drivers/addi_apci_1032.c
|
|
+++ b/drivers/staging/comedi/drivers/addi_apci_1032.c
|
|
@@ -260,6 +260,7 @@ static irqreturn_t apci1032_interrupt(int irq, void *d)
|
|
struct apci1032_private *devpriv = dev->private;
|
|
struct comedi_subdevice *s = dev->read_subdev;
|
|
unsigned int ctrl;
|
|
+ unsigned short val;
|
|
|
|
/* check interrupt is from this device */
|
|
if ((inl(devpriv->amcc_iobase + AMCC_OP_REG_INTCSR) &
|
|
@@ -275,7 +276,8 @@ static irqreturn_t apci1032_interrupt(int irq, void *d)
|
|
outl(ctrl & ~APCI1032_CTRL_INT_ENA, dev->iobase + APCI1032_CTRL_REG);
|
|
|
|
s->state = inl(dev->iobase + APCI1032_STATUS_REG) & 0xffff;
|
|
- comedi_buf_write_samples(s, &s->state, 1);
|
|
+ val = s->state;
|
|
+ comedi_buf_write_samples(s, &val, 1);
|
|
comedi_handle_events(dev, s);
|
|
|
|
/* enable the interrupt */
|
|
diff --git a/drivers/staging/comedi/drivers/addi_apci_1500.c b/drivers/staging/comedi/drivers/addi_apci_1500.c
|
|
index 11efb21555e39..b04c15dcfb575 100644
|
|
--- a/drivers/staging/comedi/drivers/addi_apci_1500.c
|
|
+++ b/drivers/staging/comedi/drivers/addi_apci_1500.c
|
|
@@ -208,7 +208,7 @@ static irqreturn_t apci1500_interrupt(int irq, void *d)
|
|
struct comedi_device *dev = d;
|
|
struct apci1500_private *devpriv = dev->private;
|
|
struct comedi_subdevice *s = dev->read_subdev;
- unsigned int status = 0;
+ unsigned short status = 0;
unsigned int val;

val = inl(devpriv->amcc + AMCC_OP_REG_INTCSR);
@@ -238,14 +238,14 @@ static irqreturn_t apci1500_interrupt(int irq, void *d)
*
* Mask Meaning
* ---------- ------------------------------------------
- * 0x00000001 Event 1 has occurred
- * 0x00000010 Event 2 has occurred
- * 0x00000100 Counter/timer 1 has run down (not implemented)
- * 0x00001000 Counter/timer 2 has run down (not implemented)
- * 0x00010000 Counter 3 has run down (not implemented)
- * 0x00100000 Watchdog has run down (not implemented)
- * 0x01000000 Voltage error
- * 0x10000000 Short-circuit error
+ * 0b00000001 Event 1 has occurred
+ * 0b00000010 Event 2 has occurred
+ * 0b00000100 Counter/timer 1 has run down (not implemented)
+ * 0b00001000 Counter/timer 2 has run down (not implemented)
+ * 0b00010000 Counter 3 has run down (not implemented)
+ * 0b00100000 Watchdog has run down (not implemented)
+ * 0b01000000 Voltage error
+ * 0b10000000 Short-circuit error
*/
comedi_buf_write_samples(s, &status, 1);
comedi_handle_events(dev, s);
diff --git a/drivers/staging/comedi/drivers/adv_pci1710.c b/drivers/staging/comedi/drivers/adv_pci1710.c
index 692893c7e5c3d..090607760be6b 100644
--- a/drivers/staging/comedi/drivers/adv_pci1710.c
+++ b/drivers/staging/comedi/drivers/adv_pci1710.c
@@ -300,11 +300,11 @@ static int pci1710_ai_eoc(struct comedi_device *dev,
static int pci1710_ai_read_sample(struct comedi_device *dev,
struct comedi_subdevice *s,
unsigned int cur_chan,
- unsigned int *val)
+ unsigned short *val)
{
const struct boardtype *board = dev->board_ptr;
struct pci1710_private *devpriv = dev->private;
- unsigned int sample;
+ unsigned short sample;
unsigned int chan;

sample = inw(dev->iobase + PCI171X_AD_DATA_REG);
@@ -345,7 +345,7 @@ static int pci1710_ai_insn_read(struct comedi_device *dev,
pci1710_ai_setup_chanlist(dev, s, &insn->chanspec, 1, 1);

for (i = 0; i < insn->n; i++) {
- unsigned int val;
+ unsigned short val;

/* start conversion */
outw(0, dev->iobase + PCI171X_SOFTTRG_REG);
@@ -395,7 +395,7 @@ static void pci1710_handle_every_sample(struct comedi_device *dev,
{
struct comedi_cmd *cmd = &s->async->cmd;
unsigned int status;
- unsigned int val;
+ unsigned short val;
int ret;

status = inw(dev->iobase + PCI171X_STATUS_REG);
@@ -455,7 +455,7 @@ static void pci1710_handle_fifo(struct comedi_device *dev,
}

for (i = 0; i < devpriv->max_samples; i++) {
- unsigned int val;
+ unsigned short val;
int ret;

ret = pci1710_ai_read_sample(dev, s, s->async->cur_chan, &val);
diff --git a/drivers/staging/comedi/drivers/das6402.c b/drivers/staging/comedi/drivers/das6402.c
index 04e224f8b7793..96f4107b8054d 100644
--- a/drivers/staging/comedi/drivers/das6402.c
+++ b/drivers/staging/comedi/drivers/das6402.c
@@ -186,7 +186,7 @@ static irqreturn_t das6402_interrupt(int irq, void *d)
if (status & DAS6402_STATUS_FFULL) {
async->events |= COMEDI_CB_OVERFLOW;
} else if (status & DAS6402_STATUS_FFNE) {
- unsigned int val;
+ unsigned short val;

val = das6402_ai_read_sample(dev, s);
comedi_buf_write_samples(s, &val, 1);
diff --git a/drivers/staging/comedi/drivers/das800.c b/drivers/staging/comedi/drivers/das800.c
index 4ea100ff6930f..2881808d6606c 100644
--- a/drivers/staging/comedi/drivers/das800.c
+++ b/drivers/staging/comedi/drivers/das800.c
@@ -427,7 +427,7 @@ static irqreturn_t das800_interrupt(int irq, void *d)
struct comedi_cmd *cmd;
unsigned long irq_flags;
unsigned int status;
- unsigned int val;
+ unsigned short val;
bool fifo_empty;
bool fifo_overflow;
int i;
diff --git a/drivers/staging/comedi/drivers/dmm32at.c b/drivers/staging/comedi/drivers/dmm32at.c
index 17e6018918bbf..56682f01242fd 100644
--- a/drivers/staging/comedi/drivers/dmm32at.c
+++ b/drivers/staging/comedi/drivers/dmm32at.c
@@ -404,7 +404,7 @@ static irqreturn_t dmm32at_isr(int irq, void *d)
{
struct comedi_device *dev = d;
unsigned char intstat;
- unsigned int val;
+ unsigned short val;
int i;

if (!dev->attached) {
diff --git a/drivers/staging/comedi/drivers/me4000.c b/drivers/staging/comedi/drivers/me4000.c
index 726e40dc17b62..0d3d4cafce2e8 100644
--- a/drivers/staging/comedi/drivers/me4000.c
+++ b/drivers/staging/comedi/drivers/me4000.c
@@ -924,7 +924,7 @@ static irqreturn_t me4000_ai_isr(int irq, void *dev_id)
struct comedi_subdevice *s = dev->read_subdev;
int i;
int c = 0;
- unsigned int lval;
+ unsigned short lval;

if (!dev->attached)
return IRQ_NONE;
diff --git a/drivers/staging/comedi/drivers/pcl711.c b/drivers/staging/comedi/drivers/pcl711.c
index 2dbf69e309650..bd6f42fe9e3ca 100644
--- a/drivers/staging/comedi/drivers/pcl711.c
+++ b/drivers/staging/comedi/drivers/pcl711.c
@@ -184,7 +184,7 @@ static irqreturn_t pcl711_interrupt(int irq, void *d)
struct comedi_device *dev = d;
struct comedi_subdevice *s = dev->read_subdev;
struct comedi_cmd *cmd = &s->async->cmd;
- unsigned int data;
+ unsigned short data;

if (!dev->attached) {
dev_err(dev->class_dev, "spurious interrupt\n");
diff --git a/drivers/staging/comedi/drivers/pcl818.c b/drivers/staging/comedi/drivers/pcl818.c
index 63e3011158f23..f4b4a686c710f 100644
--- a/drivers/staging/comedi/drivers/pcl818.c
+++ b/drivers/staging/comedi/drivers/pcl818.c
@@ -423,7 +423,7 @@ static int pcl818_ai_eoc(struct comedi_device *dev,

static bool pcl818_ai_write_sample(struct comedi_device *dev,
struct comedi_subdevice *s,
- unsigned int chan, unsigned int val)
+ unsigned int chan, unsigned short val)
{
struct pcl818_private *devpriv = dev->private;
struct comedi_cmd *cmd = &s->async->cmd;
diff --git a/drivers/staging/ks7010/ks_wlan_net.c b/drivers/staging/ks7010/ks_wlan_net.c
index dc09cc6e1c478..09e7b4cd0138c 100644
--- a/drivers/staging/ks7010/ks_wlan_net.c
+++ b/drivers/staging/ks7010/ks_wlan_net.c
@@ -1120,6 +1120,7 @@ static int ks_wlan_set_scan(struct net_device *dev,
{
struct ks_wlan_private *priv = netdev_priv(dev);
struct iw_scan_req *req = NULL;
+ int len;

if (priv->sleep_mode == SLP_SLEEP)
return -EPERM;
@@ -1129,8 +1130,9 @@ static int ks_wlan_set_scan(struct net_device *dev,
if (wrqu->data.length == sizeof(struct iw_scan_req) &&
wrqu->data.flags & IW_SCAN_THIS_ESSID) {
req = (struct iw_scan_req *)extra;
- priv->scan_ssid_len = req->essid_len;
- memcpy(priv->scan_ssid, req->essid, priv->scan_ssid_len);
+ len = min_t(int, req->essid_len, IW_ESSID_MAX_SIZE);
+ priv->scan_ssid_len = len;
+ memcpy(priv->scan_ssid, req->essid, len);
} else {
priv->scan_ssid_len = 0;
}
diff --git a/drivers/staging/rtl8188eu/core/rtw_ap.c b/drivers/staging/rtl8188eu/core/rtw_ap.c
index fa1e34a0d4561..182bb944c9b3b 100644
--- a/drivers/staging/rtl8188eu/core/rtw_ap.c
+++ b/drivers/staging/rtl8188eu/core/rtw_ap.c
@@ -791,6 +791,7 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len)
p = rtw_get_ie(ie + _BEACON_IE_OFFSET_, WLAN_EID_SSID, &ie_len,
pbss_network->ie_length - _BEACON_IE_OFFSET_);
if (p && ie_len > 0) {
+ ie_len = min_t(int, ie_len, sizeof(pbss_network->ssid.ssid));
memset(&pbss_network->ssid, 0, sizeof(struct ndis_802_11_ssid));
memcpy(pbss_network->ssid.ssid, p + 2, ie_len);
pbss_network->ssid.ssid_length = ie_len;
@@ -811,6 +812,7 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len)
p = rtw_get_ie(ie + _BEACON_IE_OFFSET_, WLAN_EID_SUPP_RATES, &ie_len,
pbss_network->ie_length - _BEACON_IE_OFFSET_);
if (p) {
+ ie_len = min_t(int, ie_len, NDIS_802_11_LENGTH_RATES_EX);
memcpy(supportRate, p + 2, ie_len);
supportRateNum = ie_len;
}
@@ -819,6 +821,8 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len)
p = rtw_get_ie(ie + _BEACON_IE_OFFSET_, WLAN_EID_EXT_SUPP_RATES,
&ie_len, pbss_network->ie_length - _BEACON_IE_OFFSET_);
if (p) {
+ ie_len = min_t(int, ie_len,
+ NDIS_802_11_LENGTH_RATES_EX - supportRateNum);
memcpy(supportRate + supportRateNum, p + 2, ie_len);
supportRateNum += ie_len;
}
@@ -934,6 +938,7 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len)

pht_cap->mcs.rx_mask[0] = 0xff;
pht_cap->mcs.rx_mask[1] = 0x0;
+ ie_len = min_t(int, ie_len, sizeof(pmlmepriv->htpriv.ht_cap));
memcpy(&pmlmepriv->htpriv.ht_cap, p + 2, ie_len);
}

diff --git a/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c b/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c
index 6f42f13a71fa7..f92fcb623a2cc 100644
--- a/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c
+++ b/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c
@@ -1133,9 +1133,11 @@ static int rtw_wx_set_scan(struct net_device *dev, struct iw_request_info *a,
break;
}
sec_len = *(pos++); len -= 1;
- if (sec_len > 0 && sec_len <= len) {
+ if (sec_len > 0 &&
+ sec_len <= len &&
+ sec_len <= 32) {
ssid[ssid_index].ssid_length = sec_len;
- memcpy(ssid[ssid_index].ssid, pos, ssid[ssid_index].ssid_length);
+ memcpy(ssid[ssid_index].ssid, pos, sec_len);
ssid_index++;
}
pos += sec_len;
diff --git a/drivers/staging/rtl8192e/rtl8192e/rtl_wx.c b/drivers/staging/rtl8192e/rtl8192e/rtl_wx.c
index 16bcee13f64b5..407effde5e71a 100644
--- a/drivers/staging/rtl8192e/rtl8192e/rtl_wx.c
+++ b/drivers/staging/rtl8192e/rtl8192e/rtl_wx.c
@@ -406,9 +406,10 @@ static int _rtl92e_wx_set_scan(struct net_device *dev,
struct iw_scan_req *req = (struct iw_scan_req *)b;

if (req->essid_len) {
- ieee->current_network.ssid_len = req->essid_len;
- memcpy(ieee->current_network.ssid, req->essid,
- req->essid_len);
+ int len = min_t(int, req->essid_len, IW_ESSID_MAX_SIZE);
+
+ ieee->current_network.ssid_len = len;
+ memcpy(ieee->current_network.ssid, req->essid, len);
}
}

diff --git a/drivers/staging/rtl8192u/r8192U_wx.c b/drivers/staging/rtl8192u/r8192U_wx.c
index d853586705fc9..77bf88696a844 100644
--- a/drivers/staging/rtl8192u/r8192U_wx.c
+++ b/drivers/staging/rtl8192u/r8192U_wx.c
@@ -331,8 +331,10 @@ static int r8192_wx_set_scan(struct net_device *dev, struct iw_request_info *a,
struct iw_scan_req *req = (struct iw_scan_req *)b;

if (req->essid_len) {
- ieee->current_network.ssid_len = req->essid_len;
- memcpy(ieee->current_network.ssid, req->essid, req->essid_len);
+ int len = min_t(int, req->essid_len, IW_ESSID_MAX_SIZE);
+
+ ieee->current_network.ssid_len = len;
+ memcpy(ieee->current_network.ssid, req->essid, len);
}
}

diff --git a/drivers/staging/rtl8712/rtl871x_cmd.c b/drivers/staging/rtl8712/rtl871x_cmd.c
index 18116469bd316..75716f59044d9 100644
--- a/drivers/staging/rtl8712/rtl871x_cmd.c
+++ b/drivers/staging/rtl8712/rtl871x_cmd.c
@@ -192,8 +192,10 @@ u8 r8712_sitesurvey_cmd(struct _adapter *padapter,
psurveyPara->ss_ssidlen = 0;
memset(psurveyPara->ss_ssid, 0, IW_ESSID_MAX_SIZE + 1);
if (pssid && pssid->SsidLength) {
- memcpy(psurveyPara->ss_ssid, pssid->Ssid, pssid->SsidLength);
- psurveyPara->ss_ssidlen = cpu_to_le32(pssid->SsidLength);
+ int len = min_t(int, pssid->SsidLength, IW_ESSID_MAX_SIZE);
+
+ memcpy(psurveyPara->ss_ssid, pssid->Ssid, len);
+ psurveyPara->ss_ssidlen = cpu_to_le32(len);
}
set_fwstate(pmlmepriv, _FW_UNDER_SURVEY);
r8712_enqueue_cmd(pcmdpriv, ph2c);
diff --git a/drivers/staging/rtl8712/rtl871x_ioctl_linux.c b/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
index cbaa7a4897483..2a661b04cd255 100644
--- a/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
+++ b/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
@@ -924,7 +924,7 @@ static int r871x_wx_set_priv(struct net_device *dev,
struct iw_point *dwrq = (struct iw_point *)awrq;

len = dwrq->length;
- ext = memdup_user(dwrq->pointer, len);
+ ext = strndup_user(dwrq->pointer, len);
if (IS_ERR(ext))
return PTR_ERR(ext);

diff --git a/drivers/target/target_core_pr.c b/drivers/target/target_core_pr.c
index 14db5e568f22b..d4cc43afe05b8 100644
--- a/drivers/target/target_core_pr.c
+++ b/drivers/target/target_core_pr.c
@@ -3739,6 +3739,7 @@ core_scsi3_pri_read_keys(struct se_cmd *cmd)
spin_unlock(&dev->t10_pr.registration_lock);

put_unaligned_be32(add_len, &buf[4]);
+ target_set_cmd_data_length(cmd, 8 + add_len);

transport_kunmap_data_sg(cmd);

@@ -3757,7 +3758,7 @@ core_scsi3_pri_read_reservation(struct se_cmd *cmd)
struct t10_pr_registration *pr_reg;
unsigned char *buf;
u64 pr_res_key;
- u32 add_len = 16; /* Hardcoded to 16 when a reservation is held. */
+ u32 add_len = 0;

if (cmd->data_length < 8) {
pr_err("PRIN SA READ_RESERVATIONS SCSI Data Length: %u"
@@ -3775,8 +3776,9 @@ core_scsi3_pri_read_reservation(struct se_cmd *cmd)
pr_reg = dev->dev_pr_res_holder;
if (pr_reg) {
/*
- * Set the hardcoded Additional Length
+ * Set the Additional Length to 16 when a reservation is held
*/
+ add_len = 16;
put_unaligned_be32(add_len, &buf[4]);

if (cmd->data_length < 22)
@@ -3812,6 +3814,8 @@ core_scsi3_pri_read_reservation(struct se_cmd *cmd)
(pr_reg->pr_res_type & 0x0f);
}

+ target_set_cmd_data_length(cmd, 8 + add_len);
+
err:
spin_unlock(&dev->dev_reservation_lock);
transport_kunmap_data_sg(cmd);
@@ -3830,7 +3834,7 @@ core_scsi3_pri_report_capabilities(struct se_cmd *cmd)
struct se_device *dev = cmd->se_dev;
struct t10_reservation *pr_tmpl = &dev->t10_pr;
unsigned char *buf;
- u16 add_len = 8; /* Hardcoded to 8. */
+ u16 len = 8; /* Hardcoded to 8. */

if (cmd->data_length < 6) {
pr_err("PRIN SA REPORT_CAPABILITIES SCSI Data Length:"
@@ -3842,7 +3846,7 @@ core_scsi3_pri_report_capabilities(struct se_cmd *cmd)
if (!buf)
return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;

- put_unaligned_be16(add_len, &buf[0]);
+ put_unaligned_be16(len, &buf[0]);
buf[2] |= 0x10; /* CRH: Compatible Reservation Hanlding bit. */
buf[2] |= 0x08; /* SIP_C: Specify Initiator Ports Capable bit */
buf[2] |= 0x04; /* ATP_C: All Target Ports Capable bit */
@@ -3871,6 +3875,8 @@ core_scsi3_pri_report_capabilities(struct se_cmd *cmd)
buf[4] |= 0x02; /* PR_TYPE_WRITE_EXCLUSIVE */
buf[5] |= 0x01; /* PR_TYPE_EXCLUSIVE_ACCESS_ALLREG */

+ target_set_cmd_data_length(cmd, len);
+
transport_kunmap_data_sg(cmd);

return 0;
@@ -4031,6 +4037,7 @@ core_scsi3_pri_read_full_status(struct se_cmd *cmd)
* Set ADDITIONAL_LENGTH
*/
put_unaligned_be32(add_len, &buf[4]);
+ target_set_cmd_data_length(cmd, 8 + add_len);

transport_kunmap_data_sg(cmd);

diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index fca4bd079d02c..8a4d58fdc9fe2 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -879,11 +879,9 @@ void target_complete_cmd(struct se_cmd *cmd, u8 scsi_status)
}
EXPORT_SYMBOL(target_complete_cmd);

-void target_complete_cmd_with_length(struct se_cmd *cmd, u8 scsi_status, int length)
+void target_set_cmd_data_length(struct se_cmd *cmd, int length)
{
- if ((scsi_status == SAM_STAT_GOOD ||
- cmd->se_cmd_flags & SCF_TREAT_READ_AS_NORMAL) &&
- length < cmd->data_length) {
+ if (length < cmd->data_length) {
if (cmd->se_cmd_flags & SCF_UNDERFLOW_BIT) {
cmd->residual_count += cmd->data_length - length;
} else {
@@ -893,6 +891,15 @@ void target_complete_cmd_with_length(struct se_cmd *cmd, u8 scsi_status, int len

cmd->data_length = length;
}
+}
+EXPORT_SYMBOL(target_set_cmd_data_length);
+
+void target_complete_cmd_with_length(struct se_cmd *cmd, u8 scsi_status, int length)
+{
+ if (scsi_status == SAM_STAT_GOOD ||
+ cmd->se_cmd_flags & SCF_TREAT_READ_AS_NORMAL) {
+ target_set_cmd_data_length(cmd, length);
+ }

target_complete_cmd(cmd, scsi_status);
}
diff --git a/drivers/tty/serial/max310x.c b/drivers/tty/serial/max310x.c
index 9795b2e8b0b2c..1b61d26bb7afe 100644
--- a/drivers/tty/serial/max310x.c
+++ b/drivers/tty/serial/max310x.c
@@ -1056,9 +1056,9 @@ static int max310x_startup(struct uart_port *port)
max310x_port_update(port, MAX310X_MODE1_REG,
MAX310X_MODE1_TRNSCVCTRL_BIT, 0);

- /* Reset FIFOs */
- max310x_port_write(port, MAX310X_MODE2_REG,
- MAX310X_MODE2_FIFORST_BIT);
+ /* Configure MODE2 register & Reset FIFOs*/
+ val = MAX310X_MODE2_RXEMPTINV_BIT | MAX310X_MODE2_FIFORST_BIT;
+ max310x_port_write(port, MAX310X_MODE2_REG, val);
max310x_port_update(port, MAX310X_MODE2_REG,
MAX310X_MODE2_FIFORST_BIT, 0);

@@ -1086,27 +1086,8 @@ static int max310x_startup(struct uart_port *port)
/* Clear IRQ status register */
max310x_port_read(port, MAX310X_IRQSTS_REG);

- /*
- * Let's ask for an interrupt after a timeout equivalent to
- * the receiving time of 4 characters after the last character
- * has been received.
- */
- max310x_port_write(port, MAX310X_RXTO_REG, 4);
-
- /*
- * Make sure we also get RX interrupts when the RX FIFO is
- * filling up quickly, so get an interrupt when half of the RX
- * FIFO has been filled in.
- */
- max310x_port_write(port, MAX310X_FIFOTRIGLVL_REG,
- MAX310X_FIFOTRIGLVL_RX(MAX310X_FIFO_SIZE / 2));
-
- /* Enable RX timeout interrupt in LSR */
- max310x_port_write(port, MAX310X_LSR_IRQEN_REG,
- MAX310X_LSR_RXTO_BIT);
-
- /* Enable LSR, RX FIFO trigger, CTS change interrupts */
- val = MAX310X_IRQ_LSR_BIT | MAX310X_IRQ_RXFIFO_BIT | MAX310X_IRQ_TXEMPTY_BIT;
+ /* Enable RX, TX, CTS change interrupts */
+ val = MAX310X_IRQ_RXEMPTY_BIT | MAX310X_IRQ_TXEMPTY_BIT;
max310x_port_write(port, MAX310X_IRQEN_REG, val | MAX310X_IRQ_CTS_BIT);

return 0;
diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
index 781905745812e..2f4e5174e78c8 100644
--- a/drivers/usb/class/cdc-acm.c
+++ b/drivers/usb/class/cdc-acm.c
@@ -1929,6 +1929,11 @@ static const struct usb_device_id acm_ids[] = {
.driver_info = SEND_ZERO_PACKET,
},

+ /* Exclude Goodix Fingerprint Reader */
+ { USB_DEVICE(0x27c6, 0x5395),
+ .driver_info = IGNORE_DEVICE,
+ },
+
/* control interfaces without any protocol set */
{ USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ACM,
USB_CDC_PROTO_NONE) },
diff --git a/drivers/usb/class/usblp.c b/drivers/usb/class/usblp.c
index c9f6e97582885..f27b4aecff3d4 100644
--- a/drivers/usb/class/usblp.c
+++ b/drivers/usb/class/usblp.c
@@ -494,16 +494,24 @@ static int usblp_release(struct inode *inode, struct file *file)
/* No kernel lock - fine */
static __poll_t usblp_poll(struct file *file, struct poll_table_struct *wait)
{
- __poll_t ret;
+ struct usblp *usblp = file->private_data;
+ __poll_t ret = 0;
unsigned long flags;

- struct usblp *usblp = file->private_data;
/* Should we check file->f_mode & FMODE_WRITE before poll_wait()? */
poll_wait(file, &usblp->rwait, wait);
poll_wait(file, &usblp->wwait, wait);
+
+ mutex_lock(&usblp->mut);
+ if (!usblp->present)
+ ret |= EPOLLHUP;
+ mutex_unlock(&usblp->mut);
+
spin_lock_irqsave(&usblp->lock, flags);
- ret = ((usblp->bidir && usblp->rcomplete) ? EPOLLIN | EPOLLRDNORM : 0) |
- ((usblp->no_paper || usblp->wcomplete) ? EPOLLOUT | EPOLLWRNORM : 0);
+ if (usblp->bidir && usblp->rcomplete)
+ ret |= EPOLLIN | EPOLLRDNORM;
+ if (usblp->no_paper || usblp->wcomplete)
+ ret |= EPOLLOUT | EPOLLWRNORM;
spin_unlock_irqrestore(&usblp->lock, flags);
return ret;
}
diff --git a/drivers/usb/core/usb.c b/drivers/usb/core/usb.c
index 8f07b05161009..a566bb494e246 100644
--- a/drivers/usb/core/usb.c
+++ b/drivers/usb/core/usb.c
@@ -748,6 +748,38 @@ void usb_put_intf(struct usb_interface *intf)
}
EXPORT_SYMBOL_GPL(usb_put_intf);

+/**
+ * usb_intf_get_dma_device - acquire a reference on the usb interface's DMA endpoint
+ * @intf: the usb interface
+ *
+ * While a USB device cannot perform DMA operations by itself, many USB
+ * controllers can. A call to usb_intf_get_dma_device() returns the DMA endpoint
+ * for the given USB interface, if any. The returned device structure must be
+ * released with put_device().
+ *
+ * See also usb_get_dma_device().
+ *
+ * Returns: A reference to the usb interface's DMA endpoint; or NULL if none
+ * exists.
+ */
+struct device *usb_intf_get_dma_device(struct usb_interface *intf)
+{
+ struct usb_device *udev = interface_to_usbdev(intf);
+ struct device *dmadev;
+
+ if (!udev->bus)
+ return NULL;
+
+ dmadev = get_device(udev->bus->sysdev);
+ if (!dmadev || !dmadev->dma_mask) {
+ put_device(dmadev);
+ return NULL;
+ }
+
+ return dmadev;
+}
+EXPORT_SYMBOL_GPL(usb_intf_get_dma_device);
+
/* USB device locking
*
* USB devices and interfaces are locked using the semaphore in their
diff --git a/drivers/usb/dwc3/dwc3-qcom.c b/drivers/usb/dwc3/dwc3-qcom.c
index c703d552bbcfc..c00c4fa139b88 100644
--- a/drivers/usb/dwc3/dwc3-qcom.c
+++ b/drivers/usb/dwc3/dwc3-qcom.c
@@ -60,12 +60,14 @@ struct dwc3_acpi_pdata {
int dp_hs_phy_irq_index;
int dm_hs_phy_irq_index;
int ss_phy_irq_index;
+ bool is_urs;
};

struct dwc3_qcom {
struct device *dev;
void __iomem *qscratch_base;
struct platform_device *dwc3;
+ struct platform_device *urs_usb;
struct clk **clks;
int num_clocks;
struct reset_control *resets;
@@ -356,8 +358,10 @@ static int dwc3_qcom_suspend(struct dwc3_qcom *qcom)
if (ret)
dev_warn(qcom->dev, "failed to disable interconnect: %d\n", ret);

+ if (device_may_wakeup(qcom->dev))
+ dwc3_qcom_enable_interrupts(qcom);
+
qcom->is_suspended = true;
- dwc3_qcom_enable_interrupts(qcom);

return 0;
}
@@ -370,7 +374,8 @@ static int dwc3_qcom_resume(struct dwc3_qcom *qcom)
if (!qcom->is_suspended)
return 0;

- dwc3_qcom_disable_interrupts(qcom);
+ if (device_may_wakeup(qcom->dev))
+ dwc3_qcom_disable_interrupts(qcom);

for (i = 0; i < qcom->num_clocks; i++) {
ret = clk_prepare_enable(qcom->clks[i]);
@@ -429,13 +434,15 @@ static void dwc3_qcom_select_utmi_clk(struct dwc3_qcom *qcom)
static int dwc3_qcom_get_irq(struct platform_device *pdev,
const char *name, int num)
{
+ struct dwc3_qcom *qcom = platform_get_drvdata(pdev);
+ struct platform_device *pdev_irq = qcom->urs_usb ? qcom->urs_usb : pdev;
struct device_node *np = pdev->dev.of_node;
int ret;

if (np)
- ret = platform_get_irq_byname(pdev, name);
+ ret = platform_get_irq_byname(pdev_irq, name);
else
- ret = platform_get_irq(pdev, num);
+ ret = platform_get_irq(pdev_irq, num);

return ret;
}
@@ -568,6 +575,8 @@ static int dwc3_qcom_acpi_register_core(struct platform_device *pdev)
struct dwc3_qcom *qcom = platform_get_drvdata(pdev);
struct device *dev = &pdev->dev;
struct resource *res, *child_res = NULL;
+ struct platform_device *pdev_irq = qcom->urs_usb ? qcom->urs_usb :
+ pdev;
int irq;
int ret;

@@ -597,7 +606,7 @@ static int dwc3_qcom_acpi_register_core(struct platform_device *pdev)
child_res[0].end = child_res[0].start +
qcom->acpi_pdata->dwc3_core_base_size;

- irq = platform_get_irq(pdev, 0);
+ irq = platform_get_irq(pdev_irq, 0);
child_res[1].flags = IORESOURCE_IRQ;
child_res[1].start = child_res[1].end = irq;

@@ -639,16 +648,46 @@ static int dwc3_qcom_of_register_core(struct platform_device *pdev)
ret = of_platform_populate(np, NULL, NULL, dev);
if (ret) {
dev_err(dev, "failed to register dwc3 core - %d\n", ret);
- return ret;
+ goto node_put;
}

qcom->dwc3 = of_find_device_by_node(dwc3_np);
if (!qcom->dwc3) {
+ ret = -ENODEV;
dev_err(dev, "failed to get dwc3 platform device\n");
- return -ENODEV;
}

- return 0;
+node_put:
+ of_node_put(dwc3_np);
+
+ return ret;
+}
+
+static struct platform_device *
+dwc3_qcom_create_urs_usb_platdev(struct device *dev)
+{
+ struct fwnode_handle *fwh;
+ struct acpi_device *adev;
+ char name[8];
+ int ret;
+ int id;
+
+ /* Figure out device id */
+ ret = sscanf(fwnode_get_name(dev->fwnode), "URS%d", &id);
+ if (!ret)
+ return NULL;
+
+ /* Find the child using name */
+ snprintf(name, sizeof(name), "USB%d", id);
+ fwh = fwnode_get_named_child_node(dev->fwnode, name);
+ if (!fwh)
+ return NULL;
+
+ adev = to_acpi_device_node(fwh);
+ if (!adev)
+ return NULL;
+
+ return acpi_create_platform_device(adev, NULL);
}

static int dwc3_qcom_probe(struct platform_device *pdev)
@@ -715,6 +754,14 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
qcom->acpi_pdata->qscratch_base_offset;
parent_res->end = parent_res->start +
qcom->acpi_pdata->qscratch_base_size;
+
+ if (qcom->acpi_pdata->is_urs) {
+ qcom->urs_usb = dwc3_qcom_create_urs_usb_platdev(dev);
+ if (!qcom->urs_usb) {
+ dev_err(dev, "failed to create URS USB platdev\n");
+ return -ENODEV;
+ }
+ }
}

qcom->qscratch_base = devm_ioremap_resource(dev, parent_res);
@@ -877,8 +924,22 @@ static const struct dwc3_acpi_pdata sdm845_acpi_pdata = {
.ss_phy_irq_index = 2
};

+static const struct dwc3_acpi_pdata sdm845_acpi_urs_pdata = {
+ .qscratch_base_offset = SDM845_QSCRATCH_BASE_OFFSET,
+ .qscratch_base_size = SDM845_QSCRATCH_SIZE,
+ .dwc3_core_base_size = SDM845_DWC3_CORE_SIZE,
+ .hs_phy_irq_index = 1,
+ .dp_hs_phy_irq_index = 4,
+ .dm_hs_phy_irq_index = 3,
+ .ss_phy_irq_index = 2,
+ .is_urs = true,
+};
+
static const struct acpi_device_id dwc3_qcom_acpi_match[] = {
{ "QCOM2430", (unsigned long)&sdm845_acpi_pdata },
+ { "QCOM0304", (unsigned long)&sdm845_acpi_urs_pdata },
+ { "QCOM0497", (unsigned long)&sdm845_acpi_urs_pdata },
+ { "QCOM04A6", (unsigned long)&sdm845_acpi_pdata },
{ },
};
MODULE_DEVICE_TABLE(acpi, dwc3_qcom_acpi_match);
diff --git a/drivers/usb/gadget/function/f_uac1.c b/drivers/usb/gadget/function/f_uac1.c
index 00d346965f7a5..560382e0a8f38 100644
--- a/drivers/usb/gadget/function/f_uac1.c
+++ b/drivers/usb/gadget/function/f_uac1.c
@@ -499,6 +499,7 @@ static void f_audio_disable(struct usb_function *f)
uac1->as_out_alt = 0;
uac1->as_in_alt = 0;

+ u_audio_stop_playback(&uac1->g_audio);
u_audio_stop_capture(&uac1->g_audio);
}

diff --git a/drivers/usb/gadget/function/f_uac2.c b/drivers/usb/gadget/function/f_uac2.c
index 5d960b6603b6f..6f03e944e0e31 100644
--- a/drivers/usb/gadget/function/f_uac2.c
+++ b/drivers/usb/gadget/function/f_uac2.c
@@ -478,7 +478,7 @@ static int set_ep_max_packet_size(const struct f_uac2_opts *uac2_opts,
}

max_size_bw = num_channels(chmask) * ssize *
- DIV_ROUND_UP(srate, factor / (1 << (ep_desc->bInterval - 1)));
+ ((srate / (factor / (1 << (ep_desc->bInterval - 1)))) + 1);
ep_desc->wMaxPacketSize = cpu_to_le16(min_t(u16, max_size_bw,
max_size_ep));

diff --git a/drivers/usb/gadget/function/u_ether_configfs.h b/drivers/usb/gadget/function/u_ether_configfs.h
index bd92b57030131..f982e18a5a789 100644
--- a/drivers/usb/gadget/function/u_ether_configfs.h
+++ b/drivers/usb/gadget/function/u_ether_configfs.h
@@ -169,12 +169,11 @@ out: \
size_t len) \
{ \
struct f_##_f_##_opts *opts = to_f_##_f_##_opts(item); \
- int ret; \
+ int ret = -EINVAL; \
u8 val; \
\
mutex_lock(&opts->lock); \
- ret = sscanf(page, "%02hhx", &val); \
- if (ret > 0) { \
+ if (sscanf(page, "%02hhx", &val) > 0) { \
opts->_n_ = val; \
ret = len; \
} \
diff --git a/drivers/usb/gadget/udc/s3c2410_udc.c b/drivers/usb/gadget/udc/s3c2410_udc.c
index f1ea51476add0..1d3ebb07ccd4d 100644
--- a/drivers/usb/gadget/udc/s3c2410_udc.c
+++ b/drivers/usb/gadget/udc/s3c2410_udc.c
@@ -1773,8 +1773,8 @@ static int s3c2410_udc_probe(struct platform_device *pdev)
udc_info = dev_get_platdata(&pdev->dev);

base_addr = devm_platform_ioremap_resource(pdev, 0);
- if (!base_addr) {
- retval = -ENOMEM;
+ if (IS_ERR(base_addr)) {
+ retval = PTR_ERR(base_addr);
goto err_mem;
}

diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
index 84da8406d5b42..5bbccc9a0179f 100644
--- a/drivers/usb/host/xhci-pci.c
+++ b/drivers/usb/host/xhci-pci.c
@@ -66,6 +66,7 @@
#define PCI_DEVICE_ID_ASMEDIA_1042A_XHCI 0x1142
#define PCI_DEVICE_ID_ASMEDIA_1142_XHCI 0x1242
#define PCI_DEVICE_ID_ASMEDIA_2142_XHCI 0x2142
+#define PCI_DEVICE_ID_ASMEDIA_3242_XHCI 0x3242

static const char hcd_name[] = "xhci_hcd";

@@ -276,11 +277,14 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
pdev->device == PCI_DEVICE_ID_ASMEDIA_1042_XHCI)
xhci->quirks |= XHCI_BROKEN_STREAMS;
if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
- pdev->device == PCI_DEVICE_ID_ASMEDIA_1042A_XHCI)
+ pdev->device == PCI_DEVICE_ID_ASMEDIA_1042A_XHCI) {
xhci->quirks |= XHCI_TRUST_TX_LENGTH;
+ xhci->quirks |= XHCI_NO_64BIT_SUPPORT;
+ }
if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
(pdev->device == PCI_DEVICE_ID_ASMEDIA_1142_XHCI ||
- pdev->device == PCI_DEVICE_ID_ASMEDIA_2142_XHCI))
+ pdev->device == PCI_DEVICE_ID_ASMEDIA_2142_XHCI ||
+ pdev->device == PCI_DEVICE_ID_ASMEDIA_3242_XHCI))
xhci->quirks |= XHCI_NO_64BIT_SUPPORT;

if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
@@ -295,6 +299,11 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
pdev->device == 0x9026)
xhci->quirks |= XHCI_RESET_PLL_ON_DISCONNECT;

+ if (pdev->vendor == PCI_VENDOR_ID_AMD &&
+ (pdev->device == PCI_DEVICE_ID_AMD_PROMONTORYA_2 ||
+ pdev->device == PCI_DEVICE_ID_AMD_PROMONTORYA_4))
+ xhci->quirks |= XHCI_NO_SOFT_RETRY;
+
if (xhci->quirks & XHCI_RESET_ON_RESUME)
xhci_dbg_trace(xhci, trace_xhci_dbg_quirks,
"QUIRK: Resetting on resume");
diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index 89c3be9917f66..02ea65db80f34 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -2307,7 +2307,8 @@ static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_td *td,
remaining = 0;
break;
case COMP_USB_TRANSACTION_ERROR:
- if ((ep_ring->err_count++ > MAX_SOFT_RETRY) ||
+ if (xhci->quirks & XHCI_NO_SOFT_RETRY ||
+ (ep_ring->err_count++ > MAX_SOFT_RETRY) ||
le32_to_cpu(slot_ctx->tt_info) & TT_SLOT)
break;
*status = 0;
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index 345a221028c6f..fd84ca7534e0d 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -883,44 +883,42 @@ static void xhci_clear_command_ring(struct xhci_hcd *xhci)
xhci_set_cmd_ring_deq(xhci);
}

-static void xhci_disable_port_wake_on_bits(struct xhci_hcd *xhci)
+/*
+ * Disable port wake bits if do_wakeup is not set.
+ *
+ * Also clear a possible internal port wake state left hanging for ports that
+ * detected termination but never successfully enumerated (trained to 0U).
+ * Internal wake causes immediate xHCI wake after suspend. PORT_CSC write done
+ * at enumeration clears this wake, force one here as well for unconnected ports
+ */
+
+static void xhci_disable_hub_port_wake(struct xhci_hcd *xhci,
+ struct xhci_hub *rhub,
+ bool do_wakeup)
{
- struct xhci_port **ports;
- int port_index;
unsigned long flags;
u32 t1, t2, portsc;
+ int i;

spin_lock_irqsave(&xhci->lock, flags);

- /* disable usb3 ports Wake bits */
- port_index = xhci->usb3_rhub.num_ports;
- ports = xhci->usb3_rhub.ports;
- while (port_index--) {
- t1 = readl(ports[port_index]->addr);
- portsc = t1;
- t1 = xhci_port_state_to_neutral(t1);
- t2 = t1 & ~PORT_WAKE_BITS;
- if (t1 != t2) {
- writel(t2, ports[port_index]->addr);
- xhci_dbg(xhci, "disable wake bits port %d-%d, portsc: 0x%x, write: 0x%x\n",
- xhci->usb3_rhub.hcd->self.busnum,
- port_index + 1, portsc, t2);
- }
- }
+ for (i = 0; i < rhub->num_ports; i++) {
+ portsc = readl(rhub->ports[i]->addr);
+ t1 = xhci_port_state_to_neutral(portsc);
+ t2 = t1;
+
+ /* clear wake bits if do_wake is not set */
+ if (!do_wakeup)
+ t2 &= ~PORT_WAKE_BITS;
+
+ /* Don't touch csc bit if connected or connect change is set */
+ if (!(portsc & (PORT_CSC | PORT_CONNECT)))
+ t2 |= PORT_CSC;

- /* disable usb2 ports Wake bits */
- port_index = xhci->usb2_rhub.num_ports;
- ports = xhci->usb2_rhub.ports;
- while (port_index--) {
- t1 = readl(ports[port_index]->addr);
- portsc = t1;
- t1 = xhci_port_state_to_neutral(t1);
- t2 = t1 & ~PORT_WAKE_BITS;
if (t1 != t2) {
- writel(t2, ports[port_index]->addr);
- xhci_dbg(xhci, "disable wake bits port %d-%d, portsc: 0x%x, write: 0x%x\n",
- xhci->usb2_rhub.hcd->self.busnum,
- port_index + 1, portsc, t2);
+ writel(t2, rhub->ports[i]->addr);
+ xhci_dbg(xhci, "config port %d-%d wake bits, portsc: 0x%x, write: 0x%x\n",
+ rhub->hcd->self.busnum, i + 1, portsc, t2);
}
}
spin_unlock_irqrestore(&xhci->lock, flags);
@@ -983,8 +981,8 @@ int xhci_suspend(struct xhci_hcd *xhci, bool do_wakeup)
return -EINVAL;

/* Clear root port wake on bits if wakeup not allowed. */
- if (!do_wakeup)
- xhci_disable_port_wake_on_bits(xhci);
+ xhci_disable_hub_port_wake(xhci, &xhci->usb3_rhub, do_wakeup);
+ xhci_disable_hub_port_wake(xhci, &xhci->usb2_rhub, do_wakeup);

if (!HCD_HW_ACCESSIBLE(hcd))
return 0;
@@ -1088,6 +1086,7 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
struct usb_hcd *secondary_hcd;
int retval = 0;
bool comp_timer_running = false;
+ bool pending_portevent = false;

if (!hcd->state)
return 0;
@@ -1226,13 +1225,22 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)

done:
if (retval == 0) {
- /* Resume root hubs only when have pending events. */
- if (xhci_pending_portevent(xhci)) {
+ /*
+ * Resume roothubs only if there are pending events.
+ * USB 3 devices resend U3 LFPS wake after a 100ms delay if
+ * the first wake signalling failed, give it that chance.
+ */
+ pending_portevent = xhci_pending_portevent(xhci);
+ if (!pending_portevent) {
+ msleep(120);
+ pending_portevent = xhci_pending_portevent(xhci);
|
|
+ }
|
|
+
|
|
+ if (pending_portevent) {
|
|
usb_hcd_resume_root_hub(xhci->shared_hcd);
|
|
usb_hcd_resume_root_hub(hcd);
|
|
}
|
|
}
|
|
-
|
|
/*
|
|
* If system is subject to the Quirk, Compliance Mode Timer needs to
|
|
* be re-initialized Always after a system resume. Ports are subject
|
|
diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
index 07ff95016f119..3190fd570c579 100644
--- a/drivers/usb/host/xhci.h
+++ b/drivers/usb/host/xhci.h
@@ -1883,6 +1883,7 @@ struct xhci_hcd {
 #define XHCI_SKIP_PHY_INIT	BIT_ULL(37)
 #define XHCI_DISABLE_SPARSE	BIT_ULL(38)
 #define XHCI_SG_TRB_CACHE_SIZE_QUIRK	BIT_ULL(39)
+#define XHCI_NO_SOFT_RETRY	BIT_ULL(40)
 
 	unsigned int		num_active_eps;
 	unsigned int		limit_active_eps;
diff --git a/drivers/usb/renesas_usbhs/pipe.c b/drivers/usb/renesas_usbhs/pipe.c
index e7334b7fb3a62..75fff2e4cbc65 100644
--- a/drivers/usb/renesas_usbhs/pipe.c
+++ b/drivers/usb/renesas_usbhs/pipe.c
@@ -746,6 +746,8 @@ struct usbhs_pipe *usbhs_pipe_malloc(struct usbhs_priv *priv,
 
 void usbhs_pipe_free(struct usbhs_pipe *pipe)
 {
+	usbhsp_pipe_select(pipe);
+	usbhsp_pipe_cfg_set(pipe, 0xFFFF, 0);
 	usbhsp_put_pipe(pipe);
 }
 
diff --git a/drivers/usb/serial/ch341.c b/drivers/usb/serial/ch341.c
index 28deaaec581f6..f26861246f653 100644
--- a/drivers/usb/serial/ch341.c
+++ b/drivers/usb/serial/ch341.c
@@ -86,6 +86,7 @@ static const struct usb_device_id id_table[] = {
 	{ USB_DEVICE(0x1a86, 0x7522) },
 	{ USB_DEVICE(0x1a86, 0x7523) },
 	{ USB_DEVICE(0x4348, 0x5523) },
+	{ USB_DEVICE(0x9986, 0x7523) },
 	{ },
 };
 MODULE_DEVICE_TABLE(usb, id_table);
diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
index 7bec1e730b209..6947d5f4cb5e9 100644
--- a/drivers/usb/serial/cp210x.c
+++ b/drivers/usb/serial/cp210x.c
@@ -146,6 +146,7 @@ static const struct usb_device_id id_table[] = {
 	{ USB_DEVICE(0x10C4, 0x8857) },	/* CEL EM357 ZigBee USB Stick */
 	{ USB_DEVICE(0x10C4, 0x88A4) }, /* MMB Networks ZigBee USB Device */
 	{ USB_DEVICE(0x10C4, 0x88A5) }, /* Planet Innovation Ingeni ZigBee USB Device */
+	{ USB_DEVICE(0x10C4, 0x88D8) }, /* Acuity Brands nLight Air Adapter */
 	{ USB_DEVICE(0x10C4, 0x88FB) }, /* CESINEL MEDCAL STII Network Analyzer */
 	{ USB_DEVICE(0x10C4, 0x8938) }, /* CESINEL MEDCAL S II Network Analyzer */
 	{ USB_DEVICE(0x10C4, 0x8946) }, /* Ketra N1 Wireless Interface */
@@ -202,6 +203,8 @@ static const struct usb_device_id id_table[] = {
 	{ USB_DEVICE(0x1901, 0x0194) },	/* GE Healthcare Remote Alarm Box */
 	{ USB_DEVICE(0x1901, 0x0195) },	/* GE B850/B650/B450 CP2104 DP UART interface */
 	{ USB_DEVICE(0x1901, 0x0196) },	/* GE B850 CP2105 DP UART interface */
+	{ USB_DEVICE(0x1901, 0x0197) },	/* GE CS1000 Display serial interface */
+	{ USB_DEVICE(0x1901, 0x0198) },	/* GE CS1000 M.2 Key E serial interface */
 	{ USB_DEVICE(0x199B, 0xBA30) },	/* LORD WSDA-200-USB */
 	{ USB_DEVICE(0x19CF, 0x3000) },	/* Parrot NMEA GPS Flight Recorder */
 	{ USB_DEVICE(0x1ADB, 0x0001) },	/* Schweitzer Engineering C662 Cable */
diff --git a/drivers/usb/serial/io_edgeport.c b/drivers/usb/serial/io_edgeport.c
index ba5d8df695189..4b48ef4adbeb6 100644
--- a/drivers/usb/serial/io_edgeport.c
+++ b/drivers/usb/serial/io_edgeport.c
@@ -3003,26 +3003,32 @@ static int edge_startup(struct usb_serial *serial)
 			response = -ENODEV;
 		}
 
-		usb_free_urb(edge_serial->interrupt_read_urb);
-		kfree(edge_serial->interrupt_in_buffer);
-
-		usb_free_urb(edge_serial->read_urb);
-		kfree(edge_serial->bulk_in_buffer);
-
-		kfree(edge_serial);
-
-		return response;
+		goto error;
 	}
 
 	/* start interrupt read for this edgeport this interrupt will
 	 * continue as long as the edgeport is connected */
 	response = usb_submit_urb(edge_serial->interrupt_read_urb,
 			GFP_KERNEL);
-	if (response)
+	if (response) {
 		dev_err(ddev, "%s - Error %d submitting control urb\n",
 			__func__, response);
+
+		goto error;
+	}
 	}
 	return response;
+
+error:
+	usb_free_urb(edge_serial->interrupt_read_urb);
+	kfree(edge_serial->interrupt_in_buffer);
+
+	usb_free_urb(edge_serial->read_urb);
+	kfree(edge_serial->bulk_in_buffer);
+
+	kfree(edge_serial);
+
+	return response;
 }
 
 
diff --git a/drivers/usb/usbip/stub_dev.c b/drivers/usb/usbip/stub_dev.c
index 2305d425e6c9a..8f1de1fbbeedf 100644
--- a/drivers/usb/usbip/stub_dev.c
+++ b/drivers/usb/usbip/stub_dev.c
@@ -46,6 +46,8 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
 	int sockfd = 0;
 	struct socket *socket;
 	int rv;
+	struct task_struct *tcp_rx = NULL;
+	struct task_struct *tcp_tx = NULL;
 
 	if (!sdev) {
 		dev_err(dev, "sdev is null\n");
@@ -69,23 +71,47 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
 		}
 
 		socket = sockfd_lookup(sockfd, &err);
-		if (!socket)
+		if (!socket) {
+			dev_err(dev, "failed to lookup sock");
 			goto err;
+		}
 
-		sdev->ud.tcp_socket = socket;
-		sdev->ud.sockfd = sockfd;
+		if (socket->type != SOCK_STREAM) {
+			dev_err(dev, "Expecting SOCK_STREAM - found %d",
+				socket->type);
+			goto sock_err;
+		}
 
+		/* unlock and create threads and get tasks */
 		spin_unlock_irq(&sdev->ud.lock);
+		tcp_rx = kthread_create(stub_rx_loop, &sdev->ud, "stub_rx");
+		if (IS_ERR(tcp_rx)) {
+			sockfd_put(socket);
+			return -EINVAL;
+		}
+		tcp_tx = kthread_create(stub_tx_loop, &sdev->ud, "stub_tx");
+		if (IS_ERR(tcp_tx)) {
+			kthread_stop(tcp_rx);
+			sockfd_put(socket);
+			return -EINVAL;
+		}
 
-		sdev->ud.tcp_rx = kthread_get_run(stub_rx_loop, &sdev->ud,
-						  "stub_rx");
-		sdev->ud.tcp_tx = kthread_get_run(stub_tx_loop, &sdev->ud,
-						  "stub_tx");
+		/* get task structs now */
+		get_task_struct(tcp_rx);
+		get_task_struct(tcp_tx);
 
+		/* lock and update sdev->ud state */
 		spin_lock_irq(&sdev->ud.lock);
+		sdev->ud.tcp_socket = socket;
+		sdev->ud.sockfd = sockfd;
+		sdev->ud.tcp_rx = tcp_rx;
+		sdev->ud.tcp_tx = tcp_tx;
 		sdev->ud.status = SDEV_ST_USED;
 		spin_unlock_irq(&sdev->ud.lock);
 
+		wake_up_process(sdev->ud.tcp_rx);
+		wake_up_process(sdev->ud.tcp_tx);
+
 	} else {
 		dev_info(dev, "stub down\n");
 
@@ -100,6 +126,8 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
 
 	return count;
 
+sock_err:
+	sockfd_put(socket);
 err:
 	spin_unlock_irq(&sdev->ud.lock);
 	return -EINVAL;
diff --git a/drivers/usb/usbip/vhci_sysfs.c b/drivers/usb/usbip/vhci_sysfs.c
index be37aec250c2b..e64ea314930be 100644
--- a/drivers/usb/usbip/vhci_sysfs.c
+++ b/drivers/usb/usbip/vhci_sysfs.c
@@ -312,6 +312,8 @@ static ssize_t attach_store(struct device *dev, struct device_attribute *attr,
 	struct vhci *vhci;
 	int err;
 	unsigned long flags;
+	struct task_struct *tcp_rx = NULL;
+	struct task_struct *tcp_tx = NULL;
 
 	/*
 	 * @rhport: port number of vhci_hcd
@@ -349,12 +351,35 @@ static ssize_t attach_store(struct device *dev, struct device_attribute *attr,
 
 	/* Extract socket from fd. */
 	socket = sockfd_lookup(sockfd, &err);
-	if (!socket)
+	if (!socket) {
+		dev_err(dev, "failed to lookup sock");
 		return -EINVAL;
+	}
+	if (socket->type != SOCK_STREAM) {
+		dev_err(dev, "Expecting SOCK_STREAM - found %d",
+			socket->type);
+		sockfd_put(socket);
+		return -EINVAL;
+	}
+
+	/* create threads before locking */
+	tcp_rx = kthread_create(vhci_rx_loop, &vdev->ud, "vhci_rx");
+	if (IS_ERR(tcp_rx)) {
+		sockfd_put(socket);
+		return -EINVAL;
+	}
+	tcp_tx = kthread_create(vhci_tx_loop, &vdev->ud, "vhci_tx");
+	if (IS_ERR(tcp_tx)) {
+		kthread_stop(tcp_rx);
+		sockfd_put(socket);
+		return -EINVAL;
+	}
 
-	/* now need lock until setting vdev status as used */
+	/* get task structs now */
+	get_task_struct(tcp_rx);
+	get_task_struct(tcp_tx);
 
-	/* begin a lock */
+	/* now begin lock until setting vdev status set */
 	spin_lock_irqsave(&vhci->lock, flags);
 	spin_lock(&vdev->ud.lock);
 
@@ -364,6 +389,8 @@ static ssize_t attach_store(struct device *dev, struct device_attribute *attr,
 		spin_unlock_irqrestore(&vhci->lock, flags);
 
 		sockfd_put(socket);
+		kthread_stop_put(tcp_rx);
+		kthread_stop_put(tcp_tx);
 
 		dev_err(dev, "port %d already used\n", rhport);
 		/*
@@ -382,14 +409,16 @@ static ssize_t attach_store(struct device *dev, struct device_attribute *attr,
 	vdev->speed = speed;
 	vdev->ud.sockfd = sockfd;
 	vdev->ud.tcp_socket = socket;
+	vdev->ud.tcp_rx = tcp_rx;
+	vdev->ud.tcp_tx = tcp_tx;
 	vdev->ud.status = VDEV_ST_NOTASSIGNED;
 
 	spin_unlock(&vdev->ud.lock);
 	spin_unlock_irqrestore(&vhci->lock, flags);
 	/* end the lock */
 
-	vdev->ud.tcp_rx = kthread_get_run(vhci_rx_loop, &vdev->ud, "vhci_rx");
-	vdev->ud.tcp_tx = kthread_get_run(vhci_tx_loop, &vdev->ud, "vhci_tx");
+	wake_up_process(vdev->ud.tcp_rx);
+	wake_up_process(vdev->ud.tcp_tx);
 
 	rh_port_connect(vdev, speed);
 
diff --git a/drivers/usb/usbip/vudc_sysfs.c b/drivers/usb/usbip/vudc_sysfs.c
index 100f680c572ae..a3ec39fc61778 100644
--- a/drivers/usb/usbip/vudc_sysfs.c
+++ b/drivers/usb/usbip/vudc_sysfs.c
@@ -90,8 +90,9 @@ unlock:
 }
 static BIN_ATTR_RO(dev_desc, sizeof(struct usb_device_descriptor));
 
-static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *attr,
-		     const char *in, size_t count)
+static ssize_t usbip_sockfd_store(struct device *dev,
+				  struct device_attribute *attr,
+				  const char *in, size_t count)
 {
 	struct vudc *udc = (struct vudc *) dev_get_drvdata(dev);
 	int rv;
@@ -100,6 +101,8 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
 	struct socket *socket;
 	unsigned long flags;
 	int ret;
+	struct task_struct *tcp_rx = NULL;
+	struct task_struct *tcp_tx = NULL;
 
 	rv = kstrtoint(in, 0, &sockfd);
 	if (rv != 0)
@@ -138,24 +141,54 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
 			goto unlock_ud;
 		}
 
-		udc->ud.tcp_socket = socket;
+		if (socket->type != SOCK_STREAM) {
+			dev_err(dev, "Expecting SOCK_STREAM - found %d",
+				socket->type);
+			ret = -EINVAL;
+			goto sock_err;
+		}
 
+		/* unlock and create threads and get tasks */
 		spin_unlock_irq(&udc->ud.lock);
 		spin_unlock_irqrestore(&udc->lock, flags);
 
-		udc->ud.tcp_rx = kthread_get_run(&v_rx_loop,
-						    &udc->ud, "vudc_rx");
-		udc->ud.tcp_tx = kthread_get_run(&v_tx_loop,
-						    &udc->ud, "vudc_tx");
+		tcp_rx = kthread_create(&v_rx_loop, &udc->ud, "vudc_rx");
+		if (IS_ERR(tcp_rx)) {
+			sockfd_put(socket);
+			return -EINVAL;
+		}
+		tcp_tx = kthread_create(&v_tx_loop, &udc->ud, "vudc_tx");
+		if (IS_ERR(tcp_tx)) {
+			kthread_stop(tcp_rx);
+			sockfd_put(socket);
+			return -EINVAL;
+		}
+
+		/* get task structs now */
+		get_task_struct(tcp_rx);
+		get_task_struct(tcp_tx);
 
+		/* lock and update udc->ud state */
 		spin_lock_irqsave(&udc->lock, flags);
 		spin_lock_irq(&udc->ud.lock);
+
+		udc->ud.tcp_socket = socket;
+		udc->ud.tcp_rx = tcp_rx;
+		udc->ud.tcp_tx = tcp_tx;
 		udc->ud.status = SDEV_ST_USED;
+
 		spin_unlock_irq(&udc->ud.lock);
 
 		ktime_get_ts64(&udc->start_time);
 		v_start_timer(udc);
 		udc->connected = 1;
+
+		spin_unlock_irqrestore(&udc->lock, flags);
+
+		wake_up_process(udc->ud.tcp_rx);
+		wake_up_process(udc->ud.tcp_tx);
+		return count;
+
 	} else {
 		if (!udc->connected) {
 			dev_err(dev, "Device not connected");
@@ -177,6 +210,8 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
 
 	return count;
 
+sock_err:
+	sockfd_put(socket);
 unlock_ud:
 	spin_unlock_irq(&udc->ud.lock);
 unlock:
diff --git a/drivers/xen/events/events_2l.c b/drivers/xen/events/events_2l.c
index da87f3a1e351b..b8f2f971c2f0f 100644
--- a/drivers/xen/events/events_2l.c
+++ b/drivers/xen/events/events_2l.c
@@ -47,6 +47,11 @@ static unsigned evtchn_2l_max_channels(void)
 	return EVTCHN_2L_NR_CHANNELS;
 }
 
+static void evtchn_2l_remove(evtchn_port_t evtchn, unsigned int cpu)
+{
+	clear_bit(evtchn, BM(per_cpu(cpu_evtchn_mask, cpu)));
+}
+
 static void evtchn_2l_bind_to_cpu(evtchn_port_t evtchn, unsigned int cpu,
 				  unsigned int old_cpu)
 {
@@ -72,12 +77,6 @@ static bool evtchn_2l_is_pending(evtchn_port_t port)
 	return sync_test_bit(port, BM(&s->evtchn_pending[0]));
 }
 
-static bool evtchn_2l_test_and_set_mask(evtchn_port_t port)
-{
-	struct shared_info *s = HYPERVISOR_shared_info;
-	return sync_test_and_set_bit(port, BM(&s->evtchn_mask[0]));
-}
-
 static void evtchn_2l_mask(evtchn_port_t port)
 {
 	struct shared_info *s = HYPERVISOR_shared_info;
@@ -355,18 +354,27 @@ static void evtchn_2l_resume(void)
 			EVTCHN_2L_NR_CHANNELS/BITS_PER_EVTCHN_WORD);
 }
 
+static int evtchn_2l_percpu_deinit(unsigned int cpu)
+{
+	memset(per_cpu(cpu_evtchn_mask, cpu), 0, sizeof(xen_ulong_t) *
+			EVTCHN_2L_NR_CHANNELS/BITS_PER_EVTCHN_WORD);
+
+	return 0;
+}
+
 static const struct evtchn_ops evtchn_ops_2l = {
 	.max_channels      = evtchn_2l_max_channels,
 	.nr_channels       = evtchn_2l_max_channels,
+	.remove            = evtchn_2l_remove,
 	.bind_to_cpu       = evtchn_2l_bind_to_cpu,
 	.clear_pending     = evtchn_2l_clear_pending,
 	.set_pending       = evtchn_2l_set_pending,
 	.is_pending        = evtchn_2l_is_pending,
-	.test_and_set_mask = evtchn_2l_test_and_set_mask,
 	.mask              = evtchn_2l_mask,
 	.unmask            = evtchn_2l_unmask,
 	.handle_events     = evtchn_2l_handle_events,
 	.resume            = evtchn_2l_resume,
+	.percpu_deinit     = evtchn_2l_percpu_deinit,
 };
 
 void __init xen_evtchn_2l_init(void)
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index e850f79351cbb..d9148609bd09a 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -97,13 +97,19 @@ struct irq_info {
 	short refcnt;
 	u8 spurious_cnt;
 	u8 is_accounted;
-	enum xen_irq_type type; /* type */
+	short type;		/* type: IRQT_* */
+	u8 mask_reason;		/* Why is event channel masked */
+#define EVT_MASK_REASON_EXPLICIT	0x01
+#define EVT_MASK_REASON_TEMPORARY	0x02
+#define EVT_MASK_REASON_EOI_PENDING	0x04
+	u8 is_active;		/* Is event just being handled? */
 	unsigned irq;
 	evtchn_port_t evtchn;   /* event channel */
 	unsigned short cpu;     /* cpu bound */
 	unsigned short eoi_cpu; /* EOI must happen on this cpu-1 */
 	unsigned int irq_epoch; /* If eoi_cpu valid: irq_epoch of event */
 	u64 eoi_time;           /* Time in jiffies when to EOI. */
+	spinlock_t lock;
 
 	union {
 		unsigned short virq;
@@ -152,6 +158,7 @@ static DEFINE_RWLOCK(evtchn_rwlock);
  *   evtchn_rwlock
  *     IRQ-desc lock
  *       percpu eoi_list_lock
+ *         irq_info->lock
  */
 
 static LIST_HEAD(xen_irq_list_head);
@@ -302,6 +309,8 @@ static int xen_irq_info_common_setup(struct irq_info *info,
 	info->irq = irq;
 	info->evtchn = evtchn;
 	info->cpu = cpu;
+	info->mask_reason = EVT_MASK_REASON_EXPLICIT;
+	spin_lock_init(&info->lock);
 
 	ret = set_evtchn_to_irq(evtchn, irq);
 	if (ret < 0)
@@ -368,6 +377,7 @@ static int xen_irq_info_pirq_setup(unsigned irq,
 static void xen_irq_info_cleanup(struct irq_info *info)
 {
 	set_evtchn_to_irq(info->evtchn, -1);
+	xen_evtchn_port_remove(info->evtchn, info->cpu);
 	info->evtchn = 0;
 	channels_on_cpu_dec(info);
 }
@@ -449,6 +459,34 @@ unsigned int cpu_from_evtchn(evtchn_port_t evtchn)
 	return ret;
 }
 
+static void do_mask(struct irq_info *info, u8 reason)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&info->lock, flags);
+
+	if (!info->mask_reason)
+		mask_evtchn(info->evtchn);
+
+	info->mask_reason |= reason;
+
+	spin_unlock_irqrestore(&info->lock, flags);
+}
+
+static void do_unmask(struct irq_info *info, u8 reason)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&info->lock, flags);
+
+	info->mask_reason &= ~reason;
+
+	if (!info->mask_reason)
+		unmask_evtchn(info->evtchn);
+
+	spin_unlock_irqrestore(&info->lock, flags);
+}
+
 #ifdef CONFIG_X86
 static bool pirq_check_eoi_map(unsigned irq)
 {
@@ -585,7 +623,7 @@ static void xen_irq_lateeoi_locked(struct irq_info *info, bool spurious)
 	}
 
 	info->eoi_time = 0;
-	unmask_evtchn(evtchn);
+	do_unmask(info, EVT_MASK_REASON_EOI_PENDING);
 }
 
 static void xen_irq_lateeoi_worker(struct work_struct *work)
@@ -754,6 +792,12 @@ static void xen_evtchn_close(evtchn_port_t port)
 		BUG();
 }
 
+static void event_handler_exit(struct irq_info *info)
+{
+	smp_store_release(&info->is_active, 0);
+	clear_evtchn(info->evtchn);
+}
+
 static void pirq_query_unmask(int irq)
 {
 	struct physdev_irq_status_query irq_status;
@@ -772,14 +816,15 @@ static void pirq_query_unmask(int irq)
 
 static void eoi_pirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 	struct physdev_eoi eoi = { .irq = pirq_from_irq(data->irq) };
 	int rc = 0;
 
 	if (!VALID_EVTCHN(evtchn))
 		return;
 
-	clear_evtchn(evtchn);
+	event_handler_exit(info);
 
 	if (pirq_needs_eoi(data->irq)) {
 		rc = HYPERVISOR_physdev_op(PHYSDEVOP_eoi, &eoi);
@@ -830,7 +875,8 @@ static unsigned int __startup_pirq(unsigned int irq)
 		goto err;
 
 out:
-	unmask_evtchn(evtchn);
+	do_unmask(info, EVT_MASK_REASON_EXPLICIT);
+
 	eoi_pirq(irq_get_irq_data(irq));
 
 	return 0;
@@ -857,7 +903,7 @@ static void shutdown_pirq(struct irq_data *data)
 	if (!VALID_EVTCHN(evtchn))
 		return;
 
-	mask_evtchn(evtchn);
+	do_mask(info, EVT_MASK_REASON_EXPLICIT);
 	xen_evtchn_close(evtchn);
 	xen_irq_info_cleanup(info);
 }
@@ -1602,6 +1648,8 @@ void handle_irq_for_port(evtchn_port_t port, struct evtchn_loop_ctrl *ctrl)
 	}
 
 	info = info_for_irq(irq);
+	if (xchg_acquire(&info->is_active, 1))
+		return;
 
 	if (ctrl->defer_eoi) {
 		info->eoi_cpu = smp_processor_id();
@@ -1690,10 +1738,10 @@ void rebind_evtchn_irq(evtchn_port_t evtchn, int irq)
 }
 
 /* Rebind an evtchn so that it gets delivered to a specific cpu */
-static int xen_rebind_evtchn_to_cpu(evtchn_port_t evtchn, unsigned int tcpu)
+static int xen_rebind_evtchn_to_cpu(struct irq_info *info, unsigned int tcpu)
 {
 	struct evtchn_bind_vcpu bind_vcpu;
-	int masked;
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
 	if (!VALID_EVTCHN(evtchn))
 		return -1;
@@ -1709,7 +1757,7 @@ static int xen_rebind_evtchn_to_cpu(evtchn_port_t evtchn, unsigned int tcpu)
 	 * Mask the event while changing the VCPU binding to prevent
 	 * it being delivered on an unexpected VCPU.
 	 */
-	masked = test_and_set_mask(evtchn);
+	do_mask(info, EVT_MASK_REASON_TEMPORARY);
 
 	/*
 	 * If this fails, it usually just indicates that we're dealing with a
@@ -1719,8 +1767,7 @@ static int xen_rebind_evtchn_to_cpu(evtchn_port_t evtchn, unsigned int tcpu)
 	if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_vcpu, &bind_vcpu) >= 0)
 		bind_evtchn_to_cpu(evtchn, tcpu, false);
 
-	if (!masked)
-		unmask_evtchn(evtchn);
+	do_unmask(info, EVT_MASK_REASON_TEMPORARY);
 
 	return 0;
 }
@@ -1759,7 +1806,7 @@ static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest,
 	unsigned int tcpu = select_target_cpu(dest);
 	int ret;
 
-	ret = xen_rebind_evtchn_to_cpu(evtchn_from_irq(data->irq), tcpu);
+	ret = xen_rebind_evtchn_to_cpu(info_for_irq(data->irq), tcpu);
 	if (!ret)
 		irq_data_update_effective_affinity(data, cpumask_of(tcpu));
 
@@ -1768,28 +1815,29 @@ static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest,
 
 static void enable_dynirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
 	if (VALID_EVTCHN(evtchn))
-		unmask_evtchn(evtchn);
+		do_unmask(info, EVT_MASK_REASON_EXPLICIT);
 }
 
 static void disable_dynirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
 	if (VALID_EVTCHN(evtchn))
-		mask_evtchn(evtchn);
+		do_mask(info, EVT_MASK_REASON_EXPLICIT);
 }
 
 static void ack_dynirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
-	if (!VALID_EVTCHN(evtchn))
-		return;
-
-	clear_evtchn(evtchn);
+	if (VALID_EVTCHN(evtchn))
+		event_handler_exit(info);
 }
 
 static void mask_ack_dynirq(struct irq_data *data)
@@ -1798,18 +1846,39 @@ static void mask_ack_dynirq(struct irq_data *data)
 	ack_dynirq(data);
 }
 
+static void lateeoi_ack_dynirq(struct irq_data *data)
+{
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
+
+	if (VALID_EVTCHN(evtchn)) {
+		do_mask(info, EVT_MASK_REASON_EOI_PENDING);
+		event_handler_exit(info);
+	}
+}
+
+static void lateeoi_mask_ack_dynirq(struct irq_data *data)
+{
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
+
+	if (VALID_EVTCHN(evtchn)) {
+		do_mask(info, EVT_MASK_REASON_EXPLICIT);
+		event_handler_exit(info);
+	}
+}
+
 static int retrigger_dynirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
-	int masked;
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
 	if (!VALID_EVTCHN(evtchn))
 		return 0;
 
-	masked = test_and_set_mask(evtchn);
+	do_mask(info, EVT_MASK_REASON_TEMPORARY);
 	set_evtchn(evtchn);
-	if (!masked)
-		unmask_evtchn(evtchn);
+	do_unmask(info, EVT_MASK_REASON_TEMPORARY);
 
 	return 1;
 }
@@ -1908,10 +1977,11 @@ static void restore_cpu_ipis(unsigned int cpu)
 /* Clear an irq's pending state, in preparation for polling on it */
 void xen_clear_irq_pending(int irq)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(irq);
+	struct irq_info *info = info_for_irq(irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
 	if (VALID_EVTCHN(evtchn))
-		clear_evtchn(evtchn);
+		event_handler_exit(info);
 }
 EXPORT_SYMBOL(xen_clear_irq_pending);
 void xen_set_irq_pending(int irq)
@@ -2023,8 +2093,8 @@ static struct irq_chip xen_lateeoi_chip __read_mostly = {
 	.irq_mask		= disable_dynirq,
 	.irq_unmask		= enable_dynirq,
 
-	.irq_ack		= mask_ack_dynirq,
-	.irq_mask_ack		= mask_ack_dynirq,
+	.irq_ack		= lateeoi_ack_dynirq,
+	.irq_mask_ack		= lateeoi_mask_ack_dynirq,
 
 	.irq_set_affinity	= set_affinity_irq,
 	.irq_retrigger		= retrigger_dynirq,
diff --git a/drivers/xen/events/events_fifo.c b/drivers/xen/events/events_fifo.c
index b234f1766810c..ad9fe51d3fb33 100644
--- a/drivers/xen/events/events_fifo.c
+++ b/drivers/xen/events/events_fifo.c
@@ -209,12 +209,6 @@ static bool evtchn_fifo_is_pending(evtchn_port_t port)
 	return sync_test_bit(EVTCHN_FIFO_BIT(PENDING, word), BM(word));
 }
 
-static bool evtchn_fifo_test_and_set_mask(evtchn_port_t port)
-{
-	event_word_t *word = event_word_from_port(port);
-	return sync_test_and_set_bit(EVTCHN_FIFO_BIT(MASKED, word), BM(word));
-}
-
 static void evtchn_fifo_mask(evtchn_port_t port)
 {
 	event_word_t *word = event_word_from_port(port);
@@ -423,7 +417,6 @@ static const struct evtchn_ops evtchn_ops_fifo = {
 	.clear_pending     = evtchn_fifo_clear_pending,
 	.set_pending       = evtchn_fifo_set_pending,
 	.is_pending        = evtchn_fifo_is_pending,
-	.test_and_set_mask = evtchn_fifo_test_and_set_mask,
 	.mask              = evtchn_fifo_mask,
 	.unmask            = evtchn_fifo_unmask,
 	.handle_events     = evtchn_fifo_handle_events,
diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h
index 0a97c0549db76..4d3398eff9cdf 100644
--- a/drivers/xen/events/events_internal.h
+++ b/drivers/xen/events/events_internal.h
@@ -14,13 +14,13 @@ struct evtchn_ops {
 	unsigned (*nr_channels)(void);
 
 	int (*setup)(evtchn_port_t port);
+	void (*remove)(evtchn_port_t port, unsigned int cpu);
 	void (*bind_to_cpu)(evtchn_port_t evtchn, unsigned int cpu,
 			    unsigned int old_cpu);
 
 	void (*clear_pending)(evtchn_port_t port);
 	void (*set_pending)(evtchn_port_t port);
 	bool (*is_pending)(evtchn_port_t port);
-	bool (*test_and_set_mask)(evtchn_port_t port);
 	void (*mask)(evtchn_port_t port);
 	void (*unmask)(evtchn_port_t port);
 
@@ -54,6 +54,13 @@ static inline int xen_evtchn_port_setup(evtchn_port_t evtchn)
 	return 0;
 }
 
+static inline void xen_evtchn_port_remove(evtchn_port_t evtchn,
+					  unsigned int cpu)
+{
+	if (evtchn_ops->remove)
+		evtchn_ops->remove(evtchn, cpu);
+}
+
 static inline void xen_evtchn_port_bind_to_cpu(evtchn_port_t evtchn,
 					       unsigned int cpu,
 					       unsigned int old_cpu)
@@ -76,11 +83,6 @@ static inline bool test_evtchn(evtchn_port_t port)
 	return evtchn_ops->is_pending(port);
 }
 
-static inline bool test_and_set_mask(evtchn_port_t port)
-{
-	return evtchn_ops->test_and_set_mask(port);
-}
-
 static inline void mask_evtchn(evtchn_port_t port)
 {
 	return evtchn_ops->mask(port);
diff --git a/fs/binfmt_misc.c b/fs/binfmt_misc.c
index 3880a82da1dc5..11b5bf2419555 100644
--- a/fs/binfmt_misc.c
+++ b/fs/binfmt_misc.c
@@ -647,12 +647,24 @@ static ssize_t bm_register_write(struct file *file, const char __user *buffer,
 	struct super_block *sb = file_inode(file)->i_sb;
 	struct dentry *root = sb->s_root, *dentry;
 	int err = 0;
+	struct file *f = NULL;
 
 	e = create_entry(buffer, count);
 
 	if (IS_ERR(e))
		return PTR_ERR(e);
 
+	if (e->flags & MISC_FMT_OPEN_FILE) {
+		f = open_exec(e->interpreter);
+		if (IS_ERR(f)) {
+			pr_notice("register: failed to install interpreter file %s\n",
+				 e->interpreter);
+			kfree(e);
+			return PTR_ERR(f);
+		}
+		e->interp_file = f;
+	}
+
 	inode_lock(d_inode(root));
 	dentry = lookup_one_len(e->name, root, strlen(e->name));
 	err = PTR_ERR(dentry);
@@ -676,21 +688,6 @@ static ssize_t bm_register_write(struct file *file, const char __user *buffer,
 		goto out2;
 	}
 
-	if (e->flags & MISC_FMT_OPEN_FILE) {
-		struct file *f;
-
-		f = open_exec(e->interpreter);
-		if (IS_ERR(f)) {
-			err = PTR_ERR(f);
-			pr_notice("register: failed to install interpreter file %s\n", e->interpreter);
-			simple_release_fs(&bm_mnt, &entry_count);
-			iput(inode);
-			inode = NULL;
-			goto out2;
-		}
-		e->interp_file = f;
-	}
-
 	e->dentry = dget(dentry);
 	inode->i_private = e;
 	inode->i_fop = &bm_entry_operations;
@@ -707,6 +704,8 @@ out:
 	inode_unlock(d_inode(root));
 
 	if (err) {
+		if (f)
+			filp_close(f, NULL);
 		kfree(e);
 		return err;
 	}
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 235b5042672e9..c33151020bcd7 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -118,13 +118,22 @@ int truncate_bdev_range(struct block_device *bdev, fmode_t mode,
 	if (!(mode & FMODE_EXCL)) {
 		int err = bd_prepare_to_claim(bdev, truncate_bdev_range);
 		if (err)
-			return err;
+			goto invalidate;
 	}
 
 	truncate_inode_pages_range(bdev->bd_inode->i_mapping, lstart, lend);
 	if (!(mode & FMODE_EXCL))
 		bd_abort_claiming(bdev, truncate_bdev_range);
 	return 0;
+
+invalidate:
+	/*
+	 * Someone else has handle exclusively open. Try invalidating instead.
+	 * The 'end' argument is inclusive so the rounding is safe.
+	 */
+	return invalidate_inode_pages2_range(bdev->bd_inode->i_mapping,
+					     lstart >> PAGE_SHIFT,
+					     lend >> PAGE_SHIFT);
 }
 EXPORT_SYMBOL(truncate_bdev_range);
 
diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
index ab883e84e116b..8a6a1772590bf 100644
--- a/fs/cifs/cifsfs.c
+++ b/fs/cifs/cifsfs.c
@@ -290,7 +290,7 @@ cifs_statfs(struct dentry *dentry, struct kstatfs *buf)
 		rc = server->ops->queryfs(xid, tcon, cifs_sb, buf);
 
 	free_xid(xid);
-	return 0;
+	return rc;
 }
 
 static long cifs_fallocate(struct file *file, int mode, loff_t off, loff_t len)
diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
index 50fcb65920e80..089a3916c639f 100644
--- a/fs/cifs/cifsglob.h
+++ b/fs/cifs/cifsglob.h
@@ -256,7 +256,7 @@ struct smb_version_operations {
 	/* verify the message */
 	int (*check_message)(char *, unsigned int, struct TCP_Server_Info *);
 	bool (*is_oplock_break)(char *, struct TCP_Server_Info *);
-	int (*handle_cancelled_mid)(char *, struct TCP_Server_Info *);
+	int (*handle_cancelled_mid)(struct mid_q_entry *, struct TCP_Server_Info *);
 	void (*downgrade_oplock)(struct TCP_Server_Info *server,
 			struct cifsInodeInfo *cinode, __u32 oplock,
 			unsigned int epoch, bool *purge_cache);
@@ -1701,10 +1701,11 @@ static inline bool is_retryable_error(int error)
 #define   CIFS_NO_RSP_BUF    0x040    /* no response buffer required */
 
 /* Type of request operation */
-#define   CIFS_ECHO_OP       0x080    /* echo request */
-#define   CIFS_OBREAK_OP   0x0100    /* oplock break request */
-#define   CIFS_NEG_OP      0x0200    /* negotiate request */
-#define   CIFS_OP_MASK     0x0380    /* mask request type */
+#define   CIFS_ECHO_OP            0x080  /* echo request */
+#define   CIFS_OBREAK_OP          0x0100 /* oplock break request */
+#define   CIFS_NEG_OP             0x0200 /* negotiate request */
+#define   CIFS_CP_CREATE_CLOSE_OP 0x0400 /* compound create+close request */
+#define   CIFS_OP_MASK            0x0780 /* mask request type */
 
 #define   CIFS_HAS_CREDITS 0x0400    /* already has credits */
 #define   CIFS_TRANSFORM_REQ 0x0800  /* transform request before sending */
diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
|
|
index 1439d3c9ff773..70d0f0388af47 100644
|
|
--- a/fs/cifs/connect.c
|
|
+++ b/fs/cifs/connect.c
|
|
@@ -1405,6 +1405,11 @@ smbd_connected:
|
|
tcp_ses->min_offload = ctx->min_offload;
|
|
tcp_ses->tcpStatus = CifsNeedNegotiate;
|
|
|
|
+ if ((ctx->max_credits < 20) || (ctx->max_credits > 60000))
|
|
+ tcp_ses->max_credits = SMB2_MAX_CREDITS_AVAILABLE;
|
|
+ else
|
|
+ tcp_ses->max_credits = ctx->max_credits;
|
|
+
|
|
tcp_ses->nr_targets = 1;
|
|
tcp_ses->ignore_signature = ctx->ignore_signature;
|
|
/* thread spawned, put it on the list */
|
|
@@ -2806,11 +2811,6 @@ static int mount_get_conns(struct smb3_fs_context *ctx, struct cifs_sb_info *cif
|
|
|
|
*nserver = server;
|
|
|
|
- if ((ctx->max_credits < 20) || (ctx->max_credits > 60000))
|
|
- server->max_credits = SMB2_MAX_CREDITS_AVAILABLE;
|
|
- else
|
|
- server->max_credits = ctx->max_credits;
|
|
-
|
|
/* get a reference to a SMB session */
|
|
ses = cifs_get_smb_ses(server, ctx);
|
|
if (IS_ERR(ses)) {
|
|
diff --git a/fs/cifs/sess.c b/fs/cifs/sess.c
|
|
index 213465718fa89..dea4959989b50 100644
|
|
--- a/fs/cifs/sess.c
|
|
+++ b/fs/cifs/sess.c
|
|
@@ -230,6 +230,7 @@ cifs_ses_add_channel(struct cifs_sb_info *cifs_sb, struct cifs_ses *ses,
|
|
ctx.noautotune = ses->server->noautotune;
|
|
ctx.sockopt_tcp_nodelay = ses->server->tcp_nodelay;
|
|
ctx.echo_interval = ses->server->echo_interval / HZ;
|
|
+ ctx.max_credits = ses->server->max_credits;
|
|
|
|
/*
|
|
* This will be used for encoding/decoding user/domain/pw
|
|
diff --git a/fs/cifs/smb2inode.c b/fs/cifs/smb2inode.c
index 1f900b81c34ae..a718dc77e604e 100644
--- a/fs/cifs/smb2inode.c
+++ b/fs/cifs/smb2inode.c
@@ -358,6 +358,7 @@ smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
 	if (cfile)
 		goto after_close;
 	/* Close */
+	flags |= CIFS_CP_CREATE_CLOSE_OP;
 	rqst[num_rqst].rq_iov = &vars->close_iov[0];
 	rqst[num_rqst].rq_nvec = 1;
 	rc = SMB2_close_init(tcon, server,
diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c
index 60d4bd1eae2b3..d9073b569e174 100644
--- a/fs/cifs/smb2misc.c
+++ b/fs/cifs/smb2misc.c
@@ -844,14 +844,14 @@ smb2_handle_cancelled_close(struct cifs_tcon *tcon, __u64 persistent_fid,
 }
 
 int
-smb2_handle_cancelled_mid(char *buffer, struct TCP_Server_Info *server)
+smb2_handle_cancelled_mid(struct mid_q_entry *mid, struct TCP_Server_Info *server)
 {
-	struct smb2_sync_hdr *sync_hdr = (struct smb2_sync_hdr *)buffer;
-	struct smb2_create_rsp *rsp = (struct smb2_create_rsp *)buffer;
+	struct smb2_sync_hdr *sync_hdr = mid->resp_buf;
+	struct smb2_create_rsp *rsp = mid->resp_buf;
 	struct cifs_tcon *tcon;
 	int rc;
 
-	if (sync_hdr->Command != SMB2_CREATE ||
+	if ((mid->optype & CIFS_CP_CREATE_CLOSE_OP) || sync_hdr->Command != SMB2_CREATE ||
 	    sync_hdr->Status != STATUS_SUCCESS)
 		return 0;
 
diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
index f19274857292b..463e81c35c428 100644
--- a/fs/cifs/smb2ops.c
+++ b/fs/cifs/smb2ops.c
@@ -1164,7 +1164,7 @@ smb2_set_ea(const unsigned int xid, struct cifs_tcon *tcon,
 	struct TCP_Server_Info *server = cifs_pick_channel(ses);
 	__le16 *utf16_path = NULL;
 	int ea_name_len = strlen(ea_name);
-	int flags = 0;
+	int flags = CIFS_CP_CREATE_CLOSE_OP;
 	int len;
 	struct smb_rqst rqst[3];
 	int resp_buftype[3];
@@ -1542,7 +1542,7 @@ smb2_ioctl_query_info(const unsigned int xid,
 	struct smb_query_info qi;
 	struct smb_query_info __user *pqi;
 	int rc = 0;
-	int flags = 0;
+	int flags = CIFS_CP_CREATE_CLOSE_OP;
 	struct smb2_query_info_rsp *qi_rsp = NULL;
 	struct smb2_ioctl_rsp *io_rsp = NULL;
 	void *buffer = NULL;
@@ -2516,7 +2516,7 @@ smb2_query_info_compound(const unsigned int xid, struct cifs_tcon *tcon,
 {
 	struct cifs_ses *ses = tcon->ses;
 	struct TCP_Server_Info *server = cifs_pick_channel(ses);
-	int flags = 0;
+	int flags = CIFS_CP_CREATE_CLOSE_OP;
 	struct smb_rqst rqst[3];
 	int resp_buftype[3];
 	struct kvec rsp_iov[3];
@@ -2914,7 +2914,7 @@ smb2_query_symlink(const unsigned int xid, struct cifs_tcon *tcon,
 	unsigned int sub_offset;
 	unsigned int print_len;
 	unsigned int print_offset;
-	int flags = 0;
+	int flags = CIFS_CP_CREATE_CLOSE_OP;
 	struct smb_rqst rqst[3];
 	int resp_buftype[3];
 	struct kvec rsp_iov[3];
@@ -3096,7 +3096,7 @@ smb2_query_reparse_tag(const unsigned int xid, struct cifs_tcon *tcon,
 	struct cifs_open_parms oparms;
 	struct cifs_fid fid;
 	struct TCP_Server_Info *server = cifs_pick_channel(tcon->ses);
-	int flags = 0;
+	int flags = CIFS_CP_CREATE_CLOSE_OP;
 	struct smb_rqst rqst[3];
 	int resp_buftype[3];
 	struct kvec rsp_iov[3];
diff --git a/fs/cifs/smb2proto.h b/fs/cifs/smb2proto.h
index 9565e27681a54..a2eb34a8d9c91 100644
--- a/fs/cifs/smb2proto.h
+++ b/fs/cifs/smb2proto.h
@@ -246,8 +246,7 @@ extern int SMB2_oplock_break(const unsigned int xid, struct cifs_tcon *tcon,
 extern int smb2_handle_cancelled_close(struct cifs_tcon *tcon,
 				       __u64 persistent_fid,
 				       __u64 volatile_fid);
-extern int smb2_handle_cancelled_mid(char *buffer,
-				     struct TCP_Server_Info *server);
+extern int smb2_handle_cancelled_mid(struct mid_q_entry *mid, struct TCP_Server_Info *server);
 void smb2_cancelled_close_fid(struct work_struct *work);
 extern int SMB2_QFS_info(const unsigned int xid, struct cifs_tcon *tcon,
 			 u64 persistent_file_id, u64 volatile_file_id,
diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
index 4a2b836eb0177..14ecf1a9f11a3 100644
--- a/fs/cifs/transport.c
+++ b/fs/cifs/transport.c
@@ -101,7 +101,7 @@ static void _cifs_mid_q_entry_release(struct kref *refcount)
 	if (midEntry->resp_buf && (midEntry->mid_flags & MID_WAIT_CANCELLED) &&
 	    midEntry->mid_state == MID_RESPONSE_RECEIVED &&
 	    server->ops->handle_cancelled_mid)
-		server->ops->handle_cancelled_mid(midEntry->resp_buf, server);
+		server->ops->handle_cancelled_mid(midEntry, server);
 
 	midEntry->mid_state = MID_FREE;
 	atomic_dec(&midCount);
diff --git a/fs/configfs/file.c b/fs/configfs/file.c
index 1f0270229d7b7..da8351d1e4552 100644
--- a/fs/configfs/file.c
+++ b/fs/configfs/file.c
@@ -378,7 +378,7 @@ static int __configfs_open_file(struct inode *inode, struct file *file, int type
 
 	attr = to_attr(dentry);
 	if (!attr)
-		goto out_put_item;
+		goto out_free_buffer;
 
 	if (type & CONFIGFS_ITEM_BIN_ATTR) {
 		buffer->bin_attr = to_bin_attr(dentry);
@@ -391,7 +391,7 @@ static int __configfs_open_file(struct inode *inode, struct file *file, int type
 	/* Grab the module reference for this attribute if we have one */
 	error = -ENODEV;
 	if (!try_module_get(buffer->owner))
-		goto out_put_item;
+		goto out_free_buffer;
 
 	error = -EACCES;
 	if (!buffer->item->ci_type)
@@ -435,8 +435,6 @@ static int __configfs_open_file(struct inode *inode, struct file *file, int type
 
 out_put_module:
 	module_put(buffer->owner);
-out_put_item:
-	config_item_put(buffer->item);
 out_free_buffer:
 	up_read(&frag->frag_sem);
 	kfree(buffer);
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 9a6f9875aa349..2ae0af1c88c78 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -4875,7 +4875,6 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
 
 	set_task_ioprio(sbi->s_journal->j_task, journal_ioprio);
 
-	sbi->s_journal->j_commit_callback = ext4_journal_commit_callback;
 	sbi->s_journal->j_submit_inode_data_buffers =
 		ext4_journal_submit_inode_data_buffers;
 	sbi->s_journal->j_finish_inode_data_buffers =
@@ -4987,6 +4986,14 @@ no_journal:
 		goto failed_mount5;
 	}
 
+	/*
+	 * We can only set up the journal commit callback once
+	 * mballoc is initialized
+	 */
+	if (sbi->s_journal)
+		sbi->s_journal->j_commit_callback =
+			ext4_journal_commit_callback;
+
 	block = ext4_count_free_clusters(sb);
 	ext4_free_blocks_count_set(sbi->s_es,
 				   EXT4_C2B(sbi, block));
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 241313278e5a5..00ef0b90d1491 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -8891,7 +8891,8 @@ static void io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
 		}
 
 		/* SQPOLL thread does its own polling */
-		if (!(ctx->flags & IORING_SETUP_SQPOLL) && !files) {
+		if ((!(ctx->flags & IORING_SETUP_SQPOLL) && !files) ||
+		    (ctx->sq_data && ctx->sq_data->thread == current)) {
 			while (!list_empty_careful(&ctx->iopoll_list)) {
 				io_iopoll_try_reap_events(ctx);
 				ret = true;
diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
index ef827ae193d22..4db3018776f68 100644
--- a/fs/nfs/dir.c
+++ b/fs/nfs/dir.c
@@ -1401,6 +1401,15 @@ out_force:
 	goto out;
 }
 
+static void nfs_mark_dir_for_revalidate(struct inode *inode)
+{
+	struct nfs_inode *nfsi = NFS_I(inode);
+
+	spin_lock(&inode->i_lock);
+	nfsi->cache_validity |= NFS_INO_REVAL_PAGECACHE;
+	spin_unlock(&inode->i_lock);
+}
+
 /*
  * We judge how long we want to trust negative
  * dentries by looking at the parent inode mtime.
@@ -1435,19 +1444,14 @@ nfs_lookup_revalidate_done(struct inode *dir, struct dentry *dentry,
 			__func__, dentry);
 		return 1;
 	case 0:
-		nfs_mark_for_revalidate(dir);
-		if (inode && S_ISDIR(inode->i_mode)) {
-			/* Purge readdir caches. */
-			nfs_zap_caches(inode);
-			/*
-			 * We can't d_drop the root of a disconnected tree:
-			 * its d_hash is on the s_anon list and d_drop() would hide
-			 * it from shrink_dcache_for_unmount(), leading to busy
-			 * inodes on unmount and further oopses.
-			 */
-			if (IS_ROOT(dentry))
-				return 1;
-		}
+		/*
+		 * We can't d_drop the root of a disconnected tree:
+		 * its d_hash is on the s_anon list and d_drop() would hide
+		 * it from shrink_dcache_for_unmount(), leading to busy
+		 * inodes on unmount and further oopses.
+		 */
+		if (inode && IS_ROOT(dentry))
+			return 1;
 		dfprintk(LOOKUPCACHE, "NFS: %s(%pd2) is invalid\n",
 				__func__, dentry);
 		return 0;
@@ -1525,6 +1529,13 @@ out:
 	nfs_free_fattr(fattr);
 	nfs_free_fhandle(fhandle);
 	nfs4_label_free(label);
+
+	/*
+	 * If the lookup failed despite the dentry change attribute being
+	 * a match, then we should revalidate the directory cache.
+	 */
+	if (!ret && nfs_verify_change_attribute(dir, dentry->d_time))
+		nfs_mark_dir_for_revalidate(dir);
 	return nfs_lookup_revalidate_done(dir, dentry, inode, ret);
 }
 
@@ -1567,7 +1578,7 @@ nfs_do_lookup_revalidate(struct inode *dir, struct dentry *dentry,
 		error = nfs_lookup_verify_inode(inode, flags);
 		if (error) {
 			if (error == -ESTALE)
-				nfs_zap_caches(dir);
+				nfs_mark_dir_for_revalidate(dir);
 			goto out_bad;
 		}
 		nfs_advise_use_readdirplus(dir);
@@ -2064,7 +2075,6 @@ out:
 	dput(parent);
 	return d;
 out_error:
-	nfs_mark_for_revalidate(dir);
 	d = ERR_PTR(error);
 	goto out;
 }
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index fc8bbfd9beb36..7eb44f37558cb 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -5972,7 +5972,7 @@ static int _nfs4_get_security_label(struct inode *inode, void *buf,
 		return ret;
 	if (!(fattr.valid & NFS_ATTR_FATTR_V4_SECURITY_LABEL))
 		return -ENOENT;
-	return 0;
+	return label.len;
 }
 
 static int nfs4_get_security_label(struct inode *inode, void *buf,
diff --git a/fs/pnode.h b/fs/pnode.h
index 26f74e092bd98..988f1aa9b02ae 100644
--- a/fs/pnode.h
+++ b/fs/pnode.h
@@ -12,7 +12,7 @@
 
 #define IS_MNT_SHARED(m) ((m)->mnt.mnt_flags & MNT_SHARED)
 #define IS_MNT_SLAVE(m) ((m)->mnt_master)
-#define IS_MNT_NEW(m)  (!(m)->mnt_ns)
+#define IS_MNT_NEW(m)  (!(m)->mnt_ns || is_anon_ns((m)->mnt_ns))
 #define CLEAR_MNT_SHARED(m) ((m)->mnt.mnt_flags &= ~MNT_SHARED)
 #define IS_MNT_UNBINDABLE(m) ((m)->mnt.mnt_flags & MNT_UNBINDABLE)
 #define IS_MNT_MARKED(m) ((m)->mnt.mnt_flags & MNT_MARKED)
diff --git a/fs/udf/inode.c b/fs/udf/inode.c
index bb89c3e43212b..0dd2f93ac0480 100644
--- a/fs/udf/inode.c
+++ b/fs/udf/inode.c
@@ -544,11 +544,14 @@ static int udf_do_extend_file(struct inode *inode,
 
 		udf_write_aext(inode, last_pos, &last_ext->extLocation,
 				last_ext->extLength, 1);
+
 		/*
-		 * We've rewritten the last extent but there may be empty
-		 * indirect extent after it - enter it.
+		 * We've rewritten the last extent. If we are going to add
+		 * more extents, we may need to enter possible following
+		 * empty indirect extent.
 		 */
-		udf_next_aext(inode, last_pos, &tmploc, &tmplen, 0);
+		if (new_block_bytes || prealloc_len)
+			udf_next_aext(inode, last_pos, &tmploc, &tmplen, 0);
 	}
 
 	/* Managed to do everything necessary? */
diff --git a/include/linux/acpi.h b/include/linux/acpi.h
index 053bf05fb1f76..b20568c440013 100644
--- a/include/linux/acpi.h
+++ b/include/linux/acpi.h
@@ -1072,19 +1072,25 @@ void __acpi_handle_debug(struct _ddebug *descriptor, acpi_handle handle, const c
 #if defined(CONFIG_ACPI) && defined(CONFIG_GPIOLIB)
 bool acpi_gpio_get_irq_resource(struct acpi_resource *ares,
 				struct acpi_resource_gpio **agpio);
-int acpi_dev_gpio_irq_get(struct acpi_device *adev, int index);
+int acpi_dev_gpio_irq_get_by(struct acpi_device *adev, const char *name, int index);
 #else
 static inline bool acpi_gpio_get_irq_resource(struct acpi_resource *ares,
 					      struct acpi_resource_gpio **agpio)
 {
 	return false;
 }
-static inline int acpi_dev_gpio_irq_get(struct acpi_device *adev, int index)
+static inline int acpi_dev_gpio_irq_get_by(struct acpi_device *adev,
+					   const char *name, int index)
 {
 	return -ENXIO;
 }
 #endif
 
+static inline int acpi_dev_gpio_irq_get(struct acpi_device *adev, int index)
+{
+	return acpi_dev_gpio_irq_get_by(adev, NULL, index);
+}
+
 /* Device properties */
 
 #ifdef CONFIG_ACPI
diff --git a/include/linux/can/skb.h b/include/linux/can/skb.h
index fc61cf4eff1c9..ce7393d397e18 100644
--- a/include/linux/can/skb.h
+++ b/include/linux/can/skb.h
@@ -49,8 +49,12 @@ static inline void can_skb_reserve(struct sk_buff *skb)
 
 static inline void can_skb_set_owner(struct sk_buff *skb, struct sock *sk)
 {
-	if (sk) {
-		sock_hold(sk);
+	/* If the socket has already been closed by user space, the
+	 * refcount may already be 0 (and the socket will be freed
+	 * after the last TX skb has been freed). So only increase
+	 * socket refcount if the refcount is > 0.
+	 */
+	if (sk && refcount_inc_not_zero(&sk->sk_refcnt)) {
 		skb->destructor = sock_efree;
 		skb->sk = sk;
 	}
diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h
index 98cff1b4b088c..189149de77a9d 100644
--- a/include/linux/compiler-clang.h
+++ b/include/linux/compiler-clang.h
@@ -41,6 +41,12 @@
 #define __no_sanitize_thread
 #endif
 
+#if defined(CONFIG_ARCH_USE_BUILTIN_BSWAP)
+#define __HAVE_BUILTIN_BSWAP32__
+#define __HAVE_BUILTIN_BSWAP64__
+#define __HAVE_BUILTIN_BSWAP16__
+#endif /* CONFIG_ARCH_USE_BUILTIN_BSWAP */
+
 #if __has_feature(undefined_behavior_sanitizer)
 /* GCC does not have __SANITIZE_UNDEFINED__ */
 #define __no_sanitize_undefined \
diff --git a/include/linux/gpio/consumer.h b/include/linux/gpio/consumer.h
index ef49307611d21..c73b25bc92134 100644
--- a/include/linux/gpio/consumer.h
+++ b/include/linux/gpio/consumer.h
@@ -674,6 +674,8 @@ struct acpi_gpio_mapping {
  * get GpioIo type explicitly, this quirk may be used.
  */
 #define ACPI_GPIO_QUIRK_ONLY_GPIOIO		BIT(1)
+/* Use given pin as an absolute GPIO number in the system */
+#define ACPI_GPIO_QUIRK_ABSOLUTE_NUMBER		BIT(2)
 
 	unsigned int quirks;
 };
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index b93c44b9121ec..7643d2dfa9594 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -460,7 +460,7 @@ static inline void memblock_free_late(phys_addr_t base, phys_addr_t size)
 /*
  * Set the allocation direction to bottom-up or top-down.
  */
-static inline void memblock_set_bottom_up(bool enable)
+static inline __init void memblock_set_bottom_up(bool enable)
 {
 	memblock.bottom_up = enable;
 }
@@ -470,7 +470,7 @@ static inline void memblock_set_bottom_up(bool enable)
  * if this is true, that said, memblock will allocate memory
  * in bottom-up direction.
  */
-static inline bool memblock_bottom_up(void)
+static inline __init bool memblock_bottom_up(void)
 {
 	return memblock.bottom_up;
 }
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index eeb0b52203e92..3e1a43c9f6641 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1072,9 +1072,7 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm,
 	rcu_read_unlock();
 }
 
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-void mem_cgroup_split_huge_fixup(struct page *head);
-#endif
+void split_page_memcg(struct page *head, unsigned int nr);
 
 #else /* CONFIG_MEMCG */
 
@@ -1416,7 +1414,7 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
 	return 0;
 }
 
-static inline void mem_cgroup_split_huge_fixup(struct page *head)
+static inline void split_page_memcg(struct page *head, unsigned int nr)
 {
 }
 
diff --git a/include/linux/memory.h b/include/linux/memory.h
index 439a89e758d87..4da95e684e20f 100644
--- a/include/linux/memory.h
+++ b/include/linux/memory.h
@@ -27,9 +27,8 @@ struct memory_block {
 	unsigned long start_section_nr;
 	unsigned long state;		/* serialized by the dev->lock */
 	int online_type;		/* for passing data to online routine */
-	int phys_device;		/* to which fru does this belong? */
-	struct device dev;
 	int nid;			/* NID for this memory block */
+	struct device dev;
 };
 
 int arch_get_memory_phys_device(unsigned long start_pfn);
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 9a38f579bc764..419a4d77de000 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -606,6 +606,7 @@ struct swevent_hlist {
 #define PERF_ATTACH_TASK	0x04
 #define PERF_ATTACH_TASK_DATA	0x08
 #define PERF_ATTACH_ITRACE	0x10
+#define PERF_ATTACH_SCHED_CB	0x20
 
 struct perf_cgroup;
 struct perf_buffer;
@@ -872,6 +873,7 @@ struct perf_cpu_context {
 	struct list_head		cgrp_cpuctx_entry;
 #endif
 
+	struct list_head		sched_cb_entry;
 	int				sched_cb_usage;
 
 	int				online;
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 8fcdfa52eb4be..dad92f9e4eac8 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -912,6 +912,10 @@ static inline void ptep_modify_prot_commit(struct vm_area_struct *vma,
 #define pgprot_device pgprot_noncached
 #endif
 
+#ifndef pgprot_mhp
+#define pgprot_mhp(prot)	(prot)
+#endif
+
 #ifdef CONFIG_MMU
 #ifndef pgprot_modify
 #define pgprot_modify pgprot_modify
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 1ae08b8462a41..90b2a0bce11ca 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -140,7 +140,8 @@ static inline bool in_vfork(struct task_struct *tsk)
 	 * another oom-unkillable task does this it should blame itself.
 	 */
 	rcu_read_lock();
-	ret = tsk->vfork_done && tsk->real_parent->mm == tsk->mm;
+	ret = tsk->vfork_done &&
+			rcu_dereference(tsk->real_parent)->mm == tsk->mm;
 	rcu_read_unlock();
 
 	return ret;
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 2f7bb92b4c9ee..f61e34fbaaea4 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -664,10 +664,7 @@ typedef struct {
 * seqcount_latch_init() - runtime initializer for seqcount_latch_t
 * @s: Pointer to the seqcount_latch_t instance
 */
-static inline void seqcount_latch_init(seqcount_latch_t *s)
-{
-	seqcount_init(&s->seqcount);
-}
+#define seqcount_latch_init(s) seqcount_init(&(s)->seqcount)
 
 /**
 * raw_read_seqcount_latch() - pick even/odd latch data copy
diff --git a/include/linux/stop_machine.h b/include/linux/stop_machine.h
index 30577c3aecf81..46fb3ebdd16e4 100644
--- a/include/linux/stop_machine.h
+++ b/include/linux/stop_machine.h
@@ -128,7 +128,7 @@ int stop_machine_from_inactive_cpu(cpu_stop_fn_t fn, void *data,
 				   const struct cpumask *cpus);
 #else	/* CONFIG_SMP || CONFIG_HOTPLUG_CPU */
 
-static inline int stop_machine_cpuslocked(cpu_stop_fn_t fn, void *data,
+static __always_inline int stop_machine_cpuslocked(cpu_stop_fn_t fn, void *data,
 					  const struct cpumask *cpus)
 {
 	unsigned long flags;
@@ -139,14 +139,15 @@ static inline int stop_machine_cpuslocked(cpu_stop_fn_t fn, void *data,
 	return ret;
 }
 
-static inline int stop_machine(cpu_stop_fn_t fn, void *data,
-			       const struct cpumask *cpus)
+static __always_inline int
+stop_machine(cpu_stop_fn_t fn, void *data, const struct cpumask *cpus)
 {
 	return stop_machine_cpuslocked(fn, data, cpus);
 }
 
-static inline int stop_machine_from_inactive_cpu(cpu_stop_fn_t fn, void *data,
-						 const struct cpumask *cpus)
+static __always_inline int
+stop_machine_from_inactive_cpu(cpu_stop_fn_t fn, void *data,
+			       const struct cpumask *cpus)
 {
 	return stop_machine(fn, data, cpus);
 }
diff --git a/include/linux/textsearch.h b/include/linux/textsearch.h
index 13770cfe33ad8..6673e4d4ac2e1 100644
--- a/include/linux/textsearch.h
+++ b/include/linux/textsearch.h
@@ -23,7 +23,7 @@ struct ts_config;
 struct ts_state
 {
 	unsigned int		offset;
-	char			cb[40];
+	char			cb[48];
 };
 
 /**
diff --git a/include/linux/usb.h b/include/linux/usb.h
index 7d72c4e0713c1..d6a41841b93e4 100644
--- a/include/linux/usb.h
+++ b/include/linux/usb.h
@@ -746,6 +746,8 @@ extern int usb_lock_device_for_reset(struct usb_device *udev,
 extern int usb_reset_device(struct usb_device *dev);
 extern void usb_queue_reset_device(struct usb_interface *dev);
 
+extern struct device *usb_intf_get_dma_device(struct usb_interface *intf);
+
 #ifdef CONFIG_ACPI
 extern int usb_acpi_set_power_state(struct usb_device *hdev, int index,
 	bool enable);
diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
index e8a924eeea3d0..6b5fcfa1e5553 100644
--- a/include/linux/virtio_net.h
+++ b/include/linux/virtio_net.h
@@ -79,8 +79,13 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
 	if (gso_type && skb->network_header) {
 		struct flow_keys_basic keys;
 
-		if (!skb->protocol)
+		if (!skb->protocol) {
+			__be16 protocol = dev_parse_header_protocol(skb);
+
 			virtio_net_hdr_set_proto(skb, hdr);
+			if (protocol && protocol != skb->protocol)
+				return -EINVAL;
+		}
 retry:
 		if (!skb_flow_dissect_flow_keys_basic(NULL, skb, &keys,
 						      NULL, 0, 0, 0,
diff --git a/include/media/rc-map.h b/include/media/rc-map.h
index 999b750bc6b88..30f138ebab6f0 100644
--- a/include/media/rc-map.h
+++ b/include/media/rc-map.h
@@ -175,6 +175,13 @@ struct rc_map_list {
 	struct rc_map map;
 };
 
+#ifdef CONFIG_MEDIA_CEC_RC
+/*
+ * rc_map_list from rc-cec.c
+ */
+extern struct rc_map_list cec_map;
+#endif
+
 /* Routines from rc-map.c */
 
 /**
diff --git a/include/target/target_core_backend.h b/include/target/target_core_backend.h
index 6336780d83a75..ce2fba49c95da 100644
--- a/include/target/target_core_backend.h
+++ b/include/target/target_core_backend.h
@@ -72,6 +72,7 @@ int transport_backend_register(const struct target_backend_ops *);
 void target_backend_unregister(const struct target_backend_ops *);
 
 void target_complete_cmd(struct se_cmd *, u8);
+void target_set_cmd_data_length(struct se_cmd *, int);
 void target_complete_cmd_with_length(struct se_cmd *, u8, int);
 
 void transport_copy_sense_to_cmd(struct se_cmd *, unsigned char *);
diff --git a/include/uapi/linux/l2tp.h b/include/uapi/linux/l2tp.h
index 30c80d5ba4bfc..bab8c97086111 100644
--- a/include/uapi/linux/l2tp.h
+++ b/include/uapi/linux/l2tp.h
@@ -145,6 +145,7 @@ enum {
 	L2TP_ATTR_RX_ERRORS,		/* u64 */
 	L2TP_ATTR_STATS_PAD,
 	L2TP_ATTR_RX_COOKIE_DISCARDS,	/* u64 */
+	L2TP_ATTR_RX_INVALID,		/* u64 */
 	__L2TP_ATTR_STATS_MAX,
 };
 
diff --git a/include/uapi/linux/netfilter/nfnetlink_cthelper.h b/include/uapi/linux/netfilter/nfnetlink_cthelper.h
index a13137afc4299..70af02092d16e 100644
--- a/include/uapi/linux/netfilter/nfnetlink_cthelper.h
+++ b/include/uapi/linux/netfilter/nfnetlink_cthelper.h
@@ -5,7 +5,7 @@
 #define NFCT_HELPER_STATUS_DISABLED	0
 #define NFCT_HELPER_STATUS_ENABLED	1
 
-enum nfnl_acct_msg_types {
+enum nfnl_cthelper_msg_types {
 	NFNL_MSG_CTHELPER_NEW,
 	NFNL_MSG_CTHELPER_GET,
 	NFNL_MSG_CTHELPER_DEL,
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 55d18791a72de..8425dbc1d239e 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -385,6 +385,7 @@ static DEFINE_MUTEX(perf_sched_mutex);
 static atomic_t perf_sched_count;
 
 static DEFINE_PER_CPU(atomic_t, perf_cgroup_events);
+static DEFINE_PER_CPU(int, perf_sched_cb_usages);
 static DEFINE_PER_CPU(struct pmu_event_list, pmu_sb_events);
 
 static atomic_t nr_mmap_events __read_mostly;
@@ -3474,11 +3475,16 @@ unlock:
 	}
 }
 
+static DEFINE_PER_CPU(struct list_head, sched_cb_list);
+
 void perf_sched_cb_dec(struct pmu *pmu)
 {
 	struct perf_cpu_context *cpuctx = this_cpu_ptr(pmu->pmu_cpu_context);
 
-	--cpuctx->sched_cb_usage;
+	this_cpu_dec(perf_sched_cb_usages);
+
+	if (!--cpuctx->sched_cb_usage)
+		list_del(&cpuctx->sched_cb_entry);
 }
 
 
@@ -3486,7 +3492,10 @@ void perf_sched_cb_inc(struct pmu *pmu)
 {
 	struct perf_cpu_context *cpuctx = this_cpu_ptr(pmu->pmu_cpu_context);
 
-	cpuctx->sched_cb_usage++;
+	if (!cpuctx->sched_cb_usage++)
+		list_add(&cpuctx->sched_cb_entry, this_cpu_ptr(&sched_cb_list));
+
+	this_cpu_inc(perf_sched_cb_usages);
 }
 
 /*
@@ -3515,6 +3524,24 @@ static void __perf_pmu_sched_task(struct perf_cpu_context *cpuctx, bool sched_in
 	perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
 }
 
+static void perf_pmu_sched_task(struct task_struct *prev,
+				struct task_struct *next,
+				bool sched_in)
+{
+	struct perf_cpu_context *cpuctx;
+
+	if (prev == next)
+		return;
+
+	list_for_each_entry(cpuctx, this_cpu_ptr(&sched_cb_list), sched_cb_entry) {
+		/* will be handled in perf_event_context_sched_in/out */
+		if (cpuctx->task_ctx)
+			continue;
+
+		__perf_pmu_sched_task(cpuctx, sched_in);
+	}
+}
+
 static void perf_event_switch(struct task_struct *task,
 			      struct task_struct *next_prev, bool sched_in);
 
@@ -3537,6 +3564,9 @@ void __perf_event_task_sched_out(struct task_struct *task,
 {
 	int ctxn;
 
+	if (__this_cpu_read(perf_sched_cb_usages))
+		perf_pmu_sched_task(task, next, false);
+
 	if (atomic_read(&nr_switch_events))
 		perf_event_switch(task, next, false);
 
@@ -3845,6 +3875,9 @@ void __perf_event_task_sched_in(struct task_struct *prev,
 
 	if (atomic_read(&nr_switch_events))
 		perf_event_switch(task, prev, true);
+
+	if (__this_cpu_read(perf_sched_cb_usages))
+		perf_pmu_sched_task(prev, task, true);
 }
 
 static u64 perf_calculate_period(struct perf_event *event, u64 nsec, u64 count)
@@ -4669,7 +4702,7 @@ static void unaccount_event(struct perf_event *event)
 	if (event->parent)
 		return;
 
-	if (event->attach_state & PERF_ATTACH_TASK)
+	if (event->attach_state & (PERF_ATTACH_TASK | PERF_ATTACH_SCHED_CB))
 		dec = true;
 	if (event->attr.mmap || event->attr.mmap_data)
 		atomic_dec(&nr_mmap_events);
@@ -11168,7 +11201,7 @@ static void account_event(struct perf_event *event)
 	if (event->parent)
 		return;
 
-	if (event->attach_state & PERF_ATTACH_TASK)
+	if (event->attach_state & (PERF_ATTACH_TASK | PERF_ATTACH_SCHED_CB))
 		inc = true;
 	if (event->attr.mmap || event->attr.mmap_data)
 		atomic_inc(&nr_mmap_events);
@@ -12960,6 +12993,7 @@ static void __init perf_event_init_all_cpus(void)
 #ifdef CONFIG_CGROUP_PERF
 		INIT_LIST_HEAD(&per_cpu(cgrp_cpuctx_list, cpu));
 #endif
+		INIT_LIST_HEAD(&per_cpu(sched_cb_list, cpu));
 	}
 }
 
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fa1f83083a58b..f0056507a373d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1862,8 +1862,13 @@ struct migration_arg {
 	struct set_affinity_pending	*pending;
 };
 
+/*
+ * @refs: number of wait_for_completion()
+ * @stop_pending: is @stop_work in use
+ */
 struct set_affinity_pending {
 	refcount_t		refs;
+	unsigned int		stop_pending;
 	struct completion	done;
 	struct cpu_stop_work	stop_work;
 	struct migration_arg	arg;
@@ -1898,8 +1903,8 @@ static struct rq *__migrate_task(struct rq *rq, struct rq_flags *rf,
  */
 static int migration_cpu_stop(void *data)
 {
-	struct set_affinity_pending *pending;
 	struct migration_arg *arg = data;
+	struct set_affinity_pending *pending = arg->pending;
 	struct task_struct *p = arg->task;
 	int dest_cpu = arg->dest_cpu;
 	struct rq *rq = this_rq();
@@ -1921,7 +1926,6 @@ static int migration_cpu_stop(void *data)
 	raw_spin_lock(&p->pi_lock);
 	rq_lock(rq, &rf);
 
-	pending = p->migration_pending;
 	/*
 	 * If task_rq(p) != rq, it cannot be migrated here, because we're
 	 * holding rq->lock, if p->on_rq == 0 it cannot get enqueued because
@@ -1932,21 +1936,14 @@ static int migration_cpu_stop(void *data)
 			goto out;
 
 		if (pending) {
-			p->migration_pending = NULL;
+			if (p->migration_pending == pending)
+				p->migration_pending = NULL;
 			complete = true;
 		}
 
-		/* migrate_enable() -- we must not race against SCA */
 		if (dest_cpu < 0) {
-			/*
-			 * When this was migrate_enable() but we no longer
-			 * have a @pending, a concurrent SCA 'fixed' things
-			 * and we should be valid again. Nothing to do.
-			 */
-			if (!pending) {
-				WARN_ON_ONCE(!cpumask_test_cpu(task_cpu(p), &p->cpus_mask));
+			if (cpumask_test_cpu(task_cpu(p), &p->cpus_mask))
 				goto out;
-			}
 
 			dest_cpu = cpumask_any_distribute(&p->cpus_mask);
 		}
@@ -1956,7 +1953,14 @@ static int migration_cpu_stop(void *data)
 		else
 			p->wake_cpu = dest_cpu;
 
-	} else if (dest_cpu < 0 || pending) {
+		/*
+		 * XXX __migrate_task() can fail, at which point we might end
+		 * up running on a dodgy CPU, AFAICT this can only happen
+		 * during CPU hotplug, at which point we'll get pushed out
+		 * anyway, so it's probably not a big deal.
+		 */
+
+	} else if (pending) {
 		/*
 		 * This happens when we get migrated between migrate_enable()'s
 		 * preempt_enable() and scheduling the stopper task. At that
@@ -1971,43 +1975,32 @@ static int migration_cpu_stop(void *data)
 		 * ->pi_lock, so the allowed mask is stable - if it got
 		 * somewhere allowed, we're done.
 		 */
-		if (pending && cpumask_test_cpu(task_cpu(p), p->cpus_ptr)) {
-			p->migration_pending = NULL;
+		if (cpumask_test_cpu(task_cpu(p), p->cpus_ptr)) {
+			if (p->migration_pending == pending)
+				p->migration_pending = NULL;
 			complete = true;
 			goto out;
 		}
 
-		/*
-		 * When this was migrate_enable() but we no longer have an
-		 * @pending, a concurrent SCA 'fixed' things and we should be
-		 * valid again. Nothing to do.
-		 */
-		if (!pending) {
|
- WARN_ON_ONCE(!cpumask_test_cpu(task_cpu(p), &p->cpus_mask));
|
|
- goto out;
|
|
- }
|
|
-
|
|
/*
|
|
* When migrate_enable() hits a rq mis-match we can't reliably
|
|
* determine is_migration_disabled() and so have to chase after
|
|
* it.
|
|
*/
|
|
+ WARN_ON_ONCE(!pending->stop_pending);
|
|
task_rq_unlock(rq, p, &rf);
|
|
stop_one_cpu_nowait(task_cpu(p), migration_cpu_stop,
|
|
&pending->arg, &pending->stop_work);
|
|
return 0;
|
|
}
|
|
out:
|
|
+ if (pending)
|
|
+ pending->stop_pending = false;
|
|
task_rq_unlock(rq, p, &rf);
|
|
|
|
if (complete)
|
|
complete_all(&pending->done);
|
|
|
|
- /* For pending->{arg,stop_work} */
|
|
- pending = arg->pending;
|
|
- if (pending && refcount_dec_and_test(&pending->refs))
|
|
- wake_up_var(&pending->refs);
|
|
-
|
|
return 0;
|
|
}
|
|
|
|
@@ -2194,11 +2187,7 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
|
|
int dest_cpu, unsigned int flags)
|
|
{
|
|
struct set_affinity_pending my_pending = { }, *pending = NULL;
|
|
- struct migration_arg arg = {
|
|
- .task = p,
|
|
- .dest_cpu = dest_cpu,
|
|
- };
|
|
- bool complete = false;
|
|
+ bool stop_pending, complete = false;
|
|
|
|
/* Can the task run on the task's current CPU? If so, we're done */
|
|
if (cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
|
|
@@ -2210,12 +2199,16 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
|
|
push_task = get_task_struct(p);
|
|
}
|
|
|
|
+ /*
|
|
+ * If there are pending waiters, but no pending stop_work,
|
|
+ * then complete now.
|
|
+ */
|
|
pending = p->migration_pending;
|
|
- if (pending) {
|
|
- refcount_inc(&pending->refs);
|
|
+ if (pending && !pending->stop_pending) {
|
|
p->migration_pending = NULL;
|
|
complete = true;
|
|
}
|
|
+
|
|
task_rq_unlock(rq, p, rf);
|
|
|
|
if (push_task) {
|
|
@@ -2224,7 +2217,7 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
|
|
}
|
|
|
|
if (complete)
|
|
- goto do_complete;
|
|
+ complete_all(&pending->done);
|
|
|
|
return 0;
|
|
}
|
|
@@ -2235,6 +2228,12 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
|
|
/* Install the request */
|
|
refcount_set(&my_pending.refs, 1);
|
|
init_completion(&my_pending.done);
|
|
+ my_pending.arg = (struct migration_arg) {
|
|
+ .task = p,
|
|
+ .dest_cpu = -1, /* any */
|
|
+ .pending = &my_pending,
|
|
+ };
|
|
+
|
|
p->migration_pending = &my_pending;
|
|
} else {
|
|
pending = p->migration_pending;
|
|
@@ -2259,45 +2258,41 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
|
|
return -EINVAL;
|
|
}
|
|
|
|
- if (flags & SCA_MIGRATE_ENABLE) {
|
|
-
|
|
- refcount_inc(&pending->refs); /* pending->{arg,stop_work} */
|
|
- p->migration_flags &= ~MDF_PUSH;
|
|
- task_rq_unlock(rq, p, rf);
|
|
-
|
|
- pending->arg = (struct migration_arg) {
|
|
- .task = p,
|
|
- .dest_cpu = -1,
|
|
- .pending = pending,
|
|
- };
|
|
-
|
|
- stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop,
|
|
- &pending->arg, &pending->stop_work);
|
|
-
|
|
- return 0;
|
|
- }
|
|
-
|
|
if (task_running(rq, p) || p->state == TASK_WAKING) {
|
|
/*
|
|
- * Lessen races (and headaches) by delegating
|
|
- * is_migration_disabled(p) checks to the stopper, which will
|
|
- * run on the same CPU as said p.
|
|
+ * MIGRATE_ENABLE gets here because 'p == current', but for
|
|
+ * anything else we cannot do is_migration_disabled(), punt
|
|
+ * and have the stopper function handle it all race-free.
|
|
*/
|
|
+ stop_pending = pending->stop_pending;
|
|
+ if (!stop_pending)
|
|
+ pending->stop_pending = true;
|
|
+
|
|
+ if (flags & SCA_MIGRATE_ENABLE)
|
|
+ p->migration_flags &= ~MDF_PUSH;
|
|
+
|
|
task_rq_unlock(rq, p, rf);
|
|
- stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
|
|
|
|
+ if (!stop_pending) {
|
|
+ stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop,
|
|
+ &pending->arg, &pending->stop_work);
|
|
+ }
|
|
+
|
|
+ if (flags & SCA_MIGRATE_ENABLE)
|
|
+ return 0;
|
|
} else {
|
|
|
|
if (!is_migration_disabled(p)) {
|
|
if (task_on_rq_queued(p))
|
|
rq = move_queued_task(rq, rf, p, dest_cpu);
|
|
|
|
- p->migration_pending = NULL;
|
|
- complete = true;
|
|
+ if (!pending->stop_pending) {
|
|
+ p->migration_pending = NULL;
|
|
+ complete = true;
|
|
+ }
|
|
}
|
|
task_rq_unlock(rq, p, rf);
|
|
|
|
-do_complete:
|
|
if (complete)
|
|
complete_all(&pending->done);
|
|
}
|
|
@@ -2305,7 +2300,7 @@ do_complete:
|
|
wait_for_completion(&pending->done);
|
|
|
|
if (refcount_dec_and_test(&pending->refs))
|
|
- wake_up_var(&pending->refs);
|
|
+ wake_up_var(&pending->refs); /* No UaF, just an address */
|
|
|
|
/*
|
|
* Block the original owner of &pending until all subsequent callers
|
|
@@ -2313,6 +2308,9 @@ do_complete:
|
|
*/
|
|
wait_var_event(&my_pending.refs, !refcount_read(&my_pending.refs));
|
|
|
|
+ /* ARGH */
|
|
+ WARN_ON_ONCE(my_pending.stop_pending);
|
|
+
|
|
return 0;
|
|
}
|
|
|
|
diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
index 08ae45ad9261d..f311bf85d2116 100644
--- a/kernel/sched/membarrier.c
+++ b/kernel/sched/membarrier.c
@@ -471,9 +471,7 @@ static int sync_runqueues_membarrier_state(struct mm_struct *mm)
 	}
 	rcu_read_unlock();
 
-	preempt_disable();
-	smp_call_function_many(tmpmask, ipi_sync_rq_state, mm, 1);
-	preempt_enable();
+	on_each_cpu_mask(tmpmask, ipi_sync_rq_state, mm, true);
 
 	free_cpumask_var(tmpmask);
 	cpus_read_unlock();
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index c9fbdd848138c..62fbd09b5dc1c 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -2962,7 +2962,7 @@ static struct ctl_table vm_table[] = {
 		.data		= &block_dump,
 		.maxlen		= sizeof(block_dump),
 		.mode		= 0644,
-		.proc_handler	= proc_dointvec,
+		.proc_handler	= proc_dointvec_minmax,
 		.extra1		= SYSCTL_ZERO,
 	},
 	{
@@ -2970,7 +2970,7 @@ static struct ctl_table vm_table[] = {
 		.data		= &sysctl_vfs_cache_pressure,
 		.maxlen		= sizeof(sysctl_vfs_cache_pressure),
 		.mode		= 0644,
-		.proc_handler	= proc_dointvec,
+		.proc_handler	= proc_dointvec_minmax,
 		.extra1		= SYSCTL_ZERO,
 	},
 #if defined(HAVE_ARCH_PICK_MMAP_LAYOUT) || \
@@ -2980,7 +2980,7 @@ static struct ctl_table vm_table[] = {
 		.data		= &sysctl_legacy_va_layout,
 		.maxlen		= sizeof(sysctl_legacy_va_layout),
 		.mode		= 0644,
-		.proc_handler	= proc_dointvec,
+		.proc_handler	= proc_dointvec_minmax,
 		.extra1		= SYSCTL_ZERO,
 	},
 #endif
@@ -2990,7 +2990,7 @@ static struct ctl_table vm_table[] = {
 		.data		= &node_reclaim_mode,
 		.maxlen		= sizeof(node_reclaim_mode),
 		.mode		= 0644,
-		.proc_handler	= proc_dointvec,
+		.proc_handler	= proc_dointvec_minmax,
 		.extra1		= SYSCTL_ZERO,
 	},
 	{
diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
index 743c852e10f23..788b9d137de4c 100644
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -546,8 +546,11 @@ static ktime_t __hrtimer_next_event_base(struct hrtimer_cpu_base *cpu_base,
 }
 
 /*
- * Recomputes cpu_base::*next_timer and returns the earliest expires_next but
- * does not set cpu_base::*expires_next, that is done by hrtimer_reprogram.
+ * Recomputes cpu_base::*next_timer and returns the earliest expires_next
+ * but does not set cpu_base::*expires_next, that is done by
+ * hrtimer[_force]_reprogram and hrtimer_interrupt only. When updating
+ * cpu_base::*expires_next right away, reprogramming logic would no longer
+ * work.
 *
 * When a softirq is pending, we can ignore the HRTIMER_ACTIVE_SOFT bases,
 * those timers will get run whenever the softirq gets handled, at the end of
@@ -588,6 +591,37 @@ __hrtimer_get_next_event(struct hrtimer_cpu_base *cpu_base, unsigned int active_
 	return expires_next;
 }
 
+static ktime_t hrtimer_update_next_event(struct hrtimer_cpu_base *cpu_base)
+{
+	ktime_t expires_next, soft = KTIME_MAX;
+
+	/*
+	 * If the soft interrupt has already been activated, ignore the
+	 * soft bases. They will be handled in the already raised soft
+	 * interrupt.
+	 */
+	if (!cpu_base->softirq_activated) {
+		soft = __hrtimer_get_next_event(cpu_base, HRTIMER_ACTIVE_SOFT);
+		/*
+		 * Update the soft expiry time. clock_settime() might have
+		 * affected it.
+		 */
+		cpu_base->softirq_expires_next = soft;
+	}
+
+	expires_next = __hrtimer_get_next_event(cpu_base, HRTIMER_ACTIVE_HARD);
+	/*
+	 * If a softirq timer is expiring first, update cpu_base->next_timer
+	 * and program the hardware with the soft expiry time.
+	 */
+	if (expires_next > soft) {
+		cpu_base->next_timer = cpu_base->softirq_next_timer;
+		expires_next = soft;
+	}
+
+	return expires_next;
+}
+
 static inline ktime_t hrtimer_update_base(struct hrtimer_cpu_base *base)
 {
 	ktime_t *offs_real = &base->clock_base[HRTIMER_BASE_REALTIME].offset;
@@ -628,23 +662,7 @@ hrtimer_force_reprogram(struct hrtimer_cpu_base *cpu_base, int skip_equal)
 {
 	ktime_t expires_next;
 
-	/*
-	 * Find the current next expiration time.
-	 */
-	expires_next = __hrtimer_get_next_event(cpu_base, HRTIMER_ACTIVE_ALL);
-
-	if (cpu_base->next_timer && cpu_base->next_timer->is_soft) {
-		/*
-		 * When the softirq is activated, hrtimer has to be
-		 * programmed with the first hard hrtimer because soft
-		 * timer interrupt could occur too late.
-		 */
-		if (cpu_base->softirq_activated)
-			expires_next = __hrtimer_get_next_event(cpu_base,
-								HRTIMER_ACTIVE_HARD);
-		else
-			cpu_base->softirq_expires_next = expires_next;
-	}
+	expires_next = hrtimer_update_next_event(cpu_base);
 
 	if (skip_equal && expires_next == cpu_base->expires_next)
 		return;
@@ -1644,8 +1662,8 @@ retry:
 
 	__hrtimer_run_queues(cpu_base, now, flags, HRTIMER_ACTIVE_HARD);
 
-	/* Reevaluate the clock bases for the next expiry */
-	expires_next = __hrtimer_get_next_event(cpu_base, HRTIMER_ACTIVE_ALL);
+	/* Reevaluate the clock bases for the [soft] next expiry */
+	expires_next = hrtimer_update_next_event(cpu_base);
 	/*
	 * Store the new expiry value so the migration code can verify
	 * against it.
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index f5fa4ba126bf6..0d3b7940cf430 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -156,6 +156,7 @@ config KASAN_STACK_ENABLE
 
 config KASAN_STACK
 	int
+	depends on KASAN_GENERIC || KASAN_SW_TAGS
 	default 1 if KASAN_STACK_ENABLE || CC_IS_GCC
 	default 0
 
diff --git a/lib/logic_pio.c b/lib/logic_pio.c
index f32fe481b4922..07b4b9a1f54b6 100644
--- a/lib/logic_pio.c
+++ b/lib/logic_pio.c
@@ -28,6 +28,8 @@ static DEFINE_MUTEX(io_range_mutex);
 * @new_range: pointer to the IO range to be registered.
 *
 * Returns 0 on success, the error code in case of failure.
+ * If the range already exists, -EEXIST will be returned, which should be
+ * considered a success.
 *
 * Register a new IO range node in the IO range list.
 */
@@ -51,6 +53,7 @@ int logic_pio_register_range(struct logic_pio_hwaddr *new_range)
 	list_for_each_entry(range, &io_range_list, list) {
 		if (range->fwnode == new_range->fwnode) {
 			/* range already there */
+			ret = -EEXIST;
 			goto end_register;
 		}
 		if (range->flags == LOGIC_PIO_CPU_MMIO &&
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
index 2947274cc2d30..5a2f104ca13f8 100644
--- a/lib/test_kasan.c
+++ b/lib/test_kasan.c
@@ -737,13 +737,13 @@ static void kasan_bitops_tags(struct kunit *test)
 		return;
 	}
 
-	/* Allocation size will be rounded to up granule size, which is 16. */
-	bits = kzalloc(sizeof(*bits), GFP_KERNEL);
+	/* kmalloc-64 cache will be used and the last 16 bytes will be the redzone. */
+	bits = kzalloc(48, GFP_KERNEL);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, bits);
 
-	/* Do the accesses past the 16 allocated bytes. */
-	kasan_bitops_modify(test, BITS_PER_LONG, &bits[1]);
-	kasan_bitops_test_and_modify(test, BITS_PER_LONG + BITS_PER_BYTE, &bits[1]);
+	/* Do the accesses past the 48 allocated bytes, but within the redzone. */
+	kasan_bitops_modify(test, BITS_PER_LONG, (void *)bits + 48);
+	kasan_bitops_test_and_modify(test, BITS_PER_LONG + BITS_PER_BYTE, (void *)bits + 48);
 
 	kfree(bits);
 }
diff --git a/mm/highmem.c b/mm/highmem.c
index 874b732b120ce..86f2b9495f9cf 100644
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -368,20 +368,24 @@ void zero_user_segments(struct page *page, unsigned start1, unsigned end1,
 
 	BUG_ON(end1 > page_size(page) || end2 > page_size(page));
 
+	if (start1 >= end1)
+		start1 = end1 = 0;
+	if (start2 >= end2)
+		start2 = end2 = 0;
+
 	for (i = 0; i < compound_nr(page); i++) {
 		void *kaddr = NULL;
 
-		if (start1 < PAGE_SIZE || start2 < PAGE_SIZE)
-			kaddr = kmap_atomic(page + i);
-
 		if (start1 >= PAGE_SIZE) {
 			start1 -= PAGE_SIZE;
 			end1 -= PAGE_SIZE;
 		} else {
 			unsigned this_end = min_t(unsigned, end1, PAGE_SIZE);
 
-			if (end1 > start1)
+			if (end1 > start1) {
+				kaddr = kmap_atomic(page + i);
 				memset(kaddr + start1, 0, this_end - start1);
+			}
 			end1 -= this_end;
 			start1 = 0;
 		}
@@ -392,8 +396,11 @@ void zero_user_segments(struct page *page, unsigned start1, unsigned end1,
 		} else {
 			unsigned this_end = min_t(unsigned, end2, PAGE_SIZE);
 
-			if (end2 > start2)
+			if (end2 > start2) {
+				if (!kaddr)
+					kaddr = kmap_atomic(page + i);
 				memset(kaddr + start2, 0, this_end - start2);
+			}
 			end2 -= this_end;
 			start2 = 0;
 		}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 91ca9b103ee52..f3affe860e2be 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2465,7 +2465,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	int i;
 
 	/* complete memcg works before add pages to LRU */
-	mem_cgroup_split_huge_fixup(head);
+	split_page_memcg(head, nr);
 
 	if (PageAnon(head) && PageSwapCache(head)) {
 		swp_entry_t entry = { .val = page_private(head) };
diff --git a/mm/madvise.c b/mm/madvise.c
index 6a660858784b8..a9bcd16b5d956 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -1197,12 +1197,22 @@ SYSCALL_DEFINE5(process_madvise, int, pidfd, const struct iovec __user *, vec,
 		goto release_task;
 	}
 
-	mm = mm_access(task, PTRACE_MODE_ATTACH_FSCREDS);
+	/* Require PTRACE_MODE_READ to avoid leaking ASLR metadata. */
+	mm = mm_access(task, PTRACE_MODE_READ_FSCREDS);
 	if (IS_ERR_OR_NULL(mm)) {
 		ret = IS_ERR(mm) ? PTR_ERR(mm) : -ESRCH;
 		goto release_task;
 	}
 
+	/*
+	 * Require CAP_SYS_NICE for influencing process performance. Note that
+	 * only non-destructive hints are currently supported.
+	 */
+	if (!capable(CAP_SYS_NICE)) {
+		ret = -EPERM;
+		goto release_mm;
+	}
+
 	total_len = iov_iter_count(&iter);
 
 	while (iov_iter_count(&iter)) {
@@ -1217,6 +1227,7 @@ SYSCALL_DEFINE5(process_madvise, int, pidfd, const struct iovec __user *, vec,
 	if (ret == 0)
 		ret = total_len - iov_iter_count(&iter);
 
+release_mm:
 	mmput(mm);
 release_task:
 	put_task_struct(task);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index d76a1f9c0e552..aa9b9536649ab 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3296,24 +3296,21 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
 
 #endif /* CONFIG_MEMCG_KMEM */
 
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 /*
- * Because page_memcg(head) is not set on compound tails, set it now.
+ * Because page_memcg(head) is not set on tails, set it now.
 */
-void mem_cgroup_split_huge_fixup(struct page *head)
+void split_page_memcg(struct page *head, unsigned int nr)
 {
 	struct mem_cgroup *memcg = page_memcg(head);
 	int i;
 
-	if (mem_cgroup_disabled())
+	if (mem_cgroup_disabled() || !memcg)
 		return;
 
-	for (i = 1; i < HPAGE_PMD_NR; i++) {
-		css_get(&memcg->css);
-		head[i].memcg_data = (unsigned long)memcg;
-	}
+	for (i = 1; i < nr; i++)
+		head[i].memcg_data = head->memcg_data;
+	css_get_many(&memcg->css, nr - 1);
 }
-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 #ifdef CONFIG_MEMCG_SWAP
 /**
diff --git a/mm/memory.c b/mm/memory.c
index c05d4c4c03d6d..97e1d045f236f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3092,6 +3092,14 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 		return handle_userfault(vmf, VM_UFFD_WP);
 	}
 
+	/*
+	 * Userfaultfd write-protect can defer flushes. Ensure the TLB
+	 * is flushed in this case before copying.
+	 */
+	if (unlikely(userfaultfd_wp(vmf->vma) &&
+		     mm_tlb_flush_pending(vmf->vma->vm_mm)))
+		flush_tlb_page(vmf->vma, vmf->address);
+
 	vmf->page = vm_normal_page(vma, vmf->address, vmf->orig_pte);
 	if (!vmf->page) {
 		/*
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index f9d57b9be8c71..fc16732d07f7f 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1019,7 +1019,7 @@ static int online_memory_block(struct memory_block *mem, void *arg)
 */
 int __ref add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
 {
-	struct mhp_params params = { .pgprot = PAGE_KERNEL };
+	struct mhp_params params = { .pgprot = pgprot_mhp(PAGE_KERNEL) };
 	u64 start, size;
 	bool new_node = false;
 	int ret;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 519a60d5b6f7d..a723e81a5da2f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1281,6 +1281,12 @@ static __always_inline bool free_pages_prepare(struct page *page,
 
 	kernel_poison_pages(page, 1 << order);
 
+	/*
+	 * With hardware tag-based KASAN, memory tags must be set before the
+	 * page becomes unavailable via debug_pagealloc or arch_free_page.
+	 */
+	kasan_free_nondeferred_pages(page, order);
+
 	/*
	 * arch_free_page() can make the page's contents inaccessible. s390
	 * does this. So nothing which can access the page's contents should
@@ -1290,8 +1296,6 @@ static __always_inline bool free_pages_prepare(struct page *page,
 
 	debug_pagealloc_unmap_pages(page, 1 << order);
 
-	kasan_free_nondeferred_pages(page, order);
-
 	return true;
 }
 
@@ -3309,6 +3313,7 @@ void split_page(struct page *page, unsigned int order)
 	for (i = 1; i < (1 << order); i++)
 		set_page_refcounted(page + i);
 	split_page_owner(page, 1 << order);
+	split_page_memcg(page, 1 << order);
 }
 EXPORT_SYMBOL_GPL(split_page);
 
@@ -6257,13 +6262,66 @@ static void __meminit zone_init_free_lists(struct zone *zone)
 	}
 }
 
+#if !defined(CONFIG_FLAT_NODE_MEM_MAP)
+/*
+ * Only struct pages that correspond to ranges defined by memblock.memory
+ * are zeroed and initialized by going through __init_single_page() during
+ * memmap_init_zone().
+ *
+ * But, there could be struct pages that correspond to holes in
+ * memblock.memory. This can happen because of the following reasons:
+ * - physical memory bank size is not necessarily the exact multiple of the
+ *   arbitrary section size
+ * - early reserved memory may not be listed in memblock.memory
+ * - memory layouts defined with memmap= kernel parameter may not align
+ *   nicely with memmap sections
+ *
+ * Explicitly initialize those struct pages so that:
+ * - PG_Reserved is set
+ * - zone and node links point to zone and node that span the page if the
+ *   hole is in the middle of a zone
+ * - zone and node links point to adjacent zone/node if the hole falls on
+ *   the zone boundary; the pages in such holes will be prepended to the
+ *   zone/node above the hole except for the trailing pages in the last
+ *   section that will be appended to the zone/node below.
+ */
+static u64 __meminit init_unavailable_range(unsigned long spfn,
+					    unsigned long epfn,
+					    int zone, int node)
+{
+	unsigned long pfn;
+	u64 pgcnt = 0;
+
+	for (pfn = spfn; pfn < epfn; pfn++) {
+		if (!pfn_valid(ALIGN_DOWN(pfn, pageblock_nr_pages))) {
+			pfn = ALIGN_DOWN(pfn, pageblock_nr_pages)
+				+ pageblock_nr_pages - 1;
+			continue;
+		}
+		__init_single_page(pfn_to_page(pfn), pfn, zone, node);
+		__SetPageReserved(pfn_to_page(pfn));
+		pgcnt++;
+	}
+
+	return pgcnt;
+}
+#else
+static inline u64 init_unavailable_range(unsigned long spfn, unsigned long epfn,
+					 int zone, int node)
+{
+	return 0;
+}
+#endif
+
 void __meminit __weak memmap_init(unsigned long size, int nid,
 				  unsigned long zone,
 				  unsigned long range_start_pfn)
 {
+	static unsigned long hole_pfn;
 	unsigned long start_pfn, end_pfn;
 	unsigned long range_end_pfn = range_start_pfn + size;
 	int i;
+	u64 pgcnt = 0;
 
 	for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
 		start_pfn = clamp(start_pfn, range_start_pfn, range_end_pfn);
@@ -6274,7 +6332,29 @@ void __meminit __weak memmap_init(unsigned long size, int nid,
 			memmap_init_zone(size, nid, zone, start_pfn, range_end_pfn,
 					 MEMINIT_EARLY, NULL, MIGRATE_MOVABLE);
 		}
+
+		if (hole_pfn < start_pfn)
+			pgcnt += init_unavailable_range(hole_pfn, start_pfn,
+							zone, nid);
+		hole_pfn = end_pfn;
 	}
+
+#ifdef CONFIG_SPARSEMEM
+	/*
+	 * Initialize the hole in the range [zone_end_pfn, section_end].
+	 * If zone boundary falls in the middle of a section, this hole
+	 * will be re-initialized during the call to this function for the
+	 * higher zone.
+	 */
+	end_pfn = round_up(range_end_pfn, PAGES_PER_SECTION);
+	if (hole_pfn < end_pfn)
+		pgcnt += init_unavailable_range(hole_pfn, end_pfn,
+						zone, nid);
+#endif
+
+	if (pgcnt)
+		pr_info("  %s zone: %llu pages in unavailable ranges\n",
+			zone_names[zone], pgcnt);
 }
 
 static int zone_batchsize(struct zone *zone)
@@ -7075,88 +7155,6 @@ void __init free_area_init_memoryless_node(int nid)
 	free_area_init_node(nid);
 }
 
-#if !defined(CONFIG_FLAT_NODE_MEM_MAP)
-/*
- * Initialize all valid struct pages in the range [spfn, epfn) and mark them
- * PageReserved(). Return the number of struct pages that were initialized.
- */
-static u64 __init init_unavailable_range(unsigned long spfn, unsigned long epfn)
-{
-	unsigned long pfn;
-	u64 pgcnt = 0;
-
-	for (pfn = spfn; pfn < epfn; pfn++) {
-		if (!pfn_valid(ALIGN_DOWN(pfn, pageblock_nr_pages))) {
-			pfn = ALIGN_DOWN(pfn, pageblock_nr_pages)
-				+ pageblock_nr_pages - 1;
-			continue;
-		}
-		/*
-		 * Use a fake node/zone (0) for now. Some of these pages
-		 * (in memblock.reserved but not in memblock.memory) will
-		 * get re-initialized via reserve_bootmem_region() later.
-		 */
-		__init_single_page(pfn_to_page(pfn), pfn, 0, 0);
-		__SetPageReserved(pfn_to_page(pfn));
-		pgcnt++;
-	}
-
-	return pgcnt;
-}
-
-/*
- * Only struct pages that are backed by physical memory are zeroed and
- * initialized by going through __init_single_page(). But, there are some
- * struct pages which are reserved in memblock allocator and their fields
- * may be accessed (for example page_to_pfn() on some configuration accesses
- * flags). We must explicitly initialize those struct pages.
- *
- * This function also addresses a similar issue where struct pages are left
- * uninitialized because the physical address range is not covered by
- * memblock.memory or memblock.reserved. That could happen when memblock
- * layout is manually configured via memmap=, or when the highest physical
- * address (max_pfn) does not end on a section boundary.
- */
-static void __init init_unavailable_mem(void)
-{
-	phys_addr_t start, end;
-	u64 i, pgcnt;
-	phys_addr_t next = 0;
-
-	/*
-	 * Loop through unavailable ranges not covered by memblock.memory.
-	 */
-	pgcnt = 0;
-	for_each_mem_range(i, &start, &end) {
-		if (next < start)
-			pgcnt += init_unavailable_range(PFN_DOWN(next),
-							PFN_UP(start));
-		next = end;
-	}
-
-	/*
-	 * Early sections always have a fully populated memmap for the whole
-	 * section - see pfn_valid(). If the last section has holes at the
-	 * end and that section is marked "online", the memmap will be
-	 * considered initialized. Make sure that memmap has a well defined
-	 * state.
-	 */
-	pgcnt += init_unavailable_range(PFN_DOWN(next),
-					round_up(max_pfn, PAGES_PER_SECTION));
-
-	/*
-	 * Struct pages that do not have backing memory. This could be because
-	 * firmware is using some of this memory, or for some other reasons.
-	 */
-	if (pgcnt)
-		pr_info("Zeroed struct page in unavailable ranges: %lld pages", pgcnt);
-}
-#else
-static inline void __init init_unavailable_mem(void)
-{
-}
-#endif /* !CONFIG_FLAT_NODE_MEM_MAP */
-
 #if MAX_NUMNODES > 1
 /*
 * Figure out the number of possible node ids.
@@ -7580,7 +7578,6 @@ void __init free_area_init(unsigned long *max_zone_pfn)
 	/* Initialise every node */
 	mminit_verify_pageflags_layout();
 	setup_nr_node_ids();
-	init_unavailable_mem();
 	for_each_online_node(nid) {
 		pg_data_t *pgdat = NODE_DATA(nid);
 		free_area_init_node(nid);
diff --git a/mm/slub.c b/mm/slub.c
|
|
index 69dacc61b8435..c86037b382531 100644
|
|
--- a/mm/slub.c
|
|
+++ b/mm/slub.c
|
|
@@ -1973,7 +1973,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
|
|
|
|
t = acquire_slab(s, n, page, object == NULL, &objects);
|
|
if (!t)
|
|
- continue; /* cmpxchg raced */
|
|
+ break;
|
|
|
|
available += objects;
|
|
if (!object) {
|
|
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
|
|
index 28b8242f18d79..2b784d62a9fe7 100644
|
|
--- a/net/core/skbuff.c
|
|
+++ b/net/core/skbuff.c
|
|
@@ -3622,6 +3622,8 @@ unsigned int skb_find_text(struct sk_buff *skb, unsigned int from,
|
|
struct ts_state state;
|
|
unsigned int ret;
|
|
|
|
+ BUILD_BUG_ON(sizeof(struct skb_seq_state) > sizeof(state.cb));
|
|
+
|
|
config->get_next_block = skb_ts_get_next_block;
|
|
config->finish = skb_ts_finish;
|
|
|
|
diff --git a/net/dsa/tag_mtk.c b/net/dsa/tag_mtk.c
|
|
index 38dcdded74c05..59748487664fe 100644
|
|
--- a/net/dsa/tag_mtk.c
|
|
+++ b/net/dsa/tag_mtk.c
|
|
@@ -13,6 +13,7 @@
|
|
#define MTK_HDR_LEN 4
|
|
#define MTK_HDR_XMIT_UNTAGGED 0
|
|
#define MTK_HDR_XMIT_TAGGED_TPID_8100 1
|
|
+#define MTK_HDR_XMIT_TAGGED_TPID_88A8 2
|
|
#define MTK_HDR_RECV_SOURCE_PORT_MASK GENMASK(2, 0)
|
|
#define MTK_HDR_XMIT_DP_BIT_MASK GENMASK(5, 0)
|
|
#define MTK_HDR_XMIT_SA_DIS BIT(6)
|
|
@@ -21,8 +22,8 @@ static struct sk_buff *mtk_tag_xmit(struct sk_buff *skb,
|
|
struct net_device *dev)
|
|
{
|
|
struct dsa_port *dp = dsa_slave_to_port(dev);
|
|
+ u8 xmit_tpid;
|
|
u8 *mtk_tag;
|
|
- bool is_vlan_skb = true;
|
|
unsigned char *dest = eth_hdr(skb)->h_dest;
|
|
bool is_multicast_skb = is_multicast_ether_addr(dest) &&
|
|
!is_broadcast_ether_addr(dest);
|
|
@@ -33,10 +34,17 @@ static struct sk_buff *mtk_tag_xmit(struct sk_buff *skb,
|
|
* the both special and VLAN tag at the same time and then look up VLAN
|
|
* table with VID.
|
|
*/
|
|
- if (!skb_vlan_tagged(skb)) {
|
|
+ switch (skb->protocol) {
|
|
+ case htons(ETH_P_8021Q):
|
|
+ xmit_tpid = MTK_HDR_XMIT_TAGGED_TPID_8100;
|
|
+ break;
|
|
+ case htons(ETH_P_8021AD):
|
|
+ xmit_tpid = MTK_HDR_XMIT_TAGGED_TPID_88A8;
|
|
+ break;
|
|
+ default:
|
|
+ xmit_tpid = MTK_HDR_XMIT_UNTAGGED;
|
|
skb_push(skb, MTK_HDR_LEN);
|
|
memmove(skb->data, skb->data + MTK_HDR_LEN, 2 * ETH_ALEN);
|
|
- is_vlan_skb = false;
|
|
}
|
|
|
|
mtk_tag = skb->data + 2 * ETH_ALEN;
|
|
@@ -44,8 +52,7 @@ static struct sk_buff *mtk_tag_xmit(struct sk_buff *skb,
|
|
/* Mark tag attribute on special tag insertion to notify hardware
|
|
* whether that's a combined special tag with 802.1Q header.
|
|
*/
|
|
- mtk_tag[0] = is_vlan_skb ? MTK_HDR_XMIT_TAGGED_TPID_8100 :
|
|
- MTK_HDR_XMIT_UNTAGGED;
|
|
+ mtk_tag[0] = xmit_tpid;
|
|
mtk_tag[1] = (1 << dp->index) & MTK_HDR_XMIT_DP_BIT_MASK;
|
|
|
|
/* Disable SA learning for multicast frames */
|
|
@@ -53,7 +60,7 @@ static struct sk_buff *mtk_tag_xmit(struct sk_buff *skb,
|
|
mtk_tag[1] |= MTK_HDR_XMIT_SA_DIS;
|
|
|
|
/* Tag control information is kept for 802.1Q */
|
|
- if (!is_vlan_skb) {
|
|
+ if (xmit_tpid == MTK_HDR_XMIT_UNTAGGED) {
|
|
mtk_tag[2] = 0;
|
|
mtk_tag[3] = 0;
|
|
}
|
|
diff --git a/net/dsa/tag_rtl4_a.c b/net/dsa/tag_rtl4_a.c
index c17d39b4a1a04..e9176475bac89 100644
--- a/net/dsa/tag_rtl4_a.c
+++ b/net/dsa/tag_rtl4_a.c
@@ -35,14 +35,12 @@ static struct sk_buff *rtl4a_tag_xmit(struct sk_buff *skb,
                                       struct net_device *dev)
 {
         struct dsa_port *dp = dsa_slave_to_port(dev);
+        __be16 *p;
         u8 *tag;
-        u16 *p;
         u16 out;

         /* Pad out to at least 60 bytes */
-        if (unlikely(eth_skb_pad(skb)))
-                return NULL;
-        if (skb_cow_head(skb, RTL4_A_HDR_LEN) < 0)
+        if (unlikely(__skb_put_padto(skb, ETH_ZLEN, false)))
                 return NULL;

         netdev_dbg(dev, "add realtek tag to package to port %d\n",
@@ -53,13 +51,13 @@ static struct sk_buff *rtl4a_tag_xmit(struct sk_buff *skb,
         tag = skb->data + 2 * ETH_ALEN;

         /* Set Ethertype */
-        p = (u16 *)tag;
+        p = (__be16 *)tag;
         *p = htons(RTL4_A_ETHERTYPE);

         out = (RTL4_A_PROTOCOL_RTL8366RB << 12) | (2 << 8);
-        /* The lower bits is the port numer */
+        /* The lower bits is the port number */
         out |= (u8)dp->index;
-        p = (u16 *)(tag + 2);
+        p = (__be16 *)(tag + 2);
         *p = htons(out);

         return skb;
diff --git a/net/ethtool/channels.c b/net/ethtool/channels.c
index 25a9e566ef5cd..6a070dc8e4b0d 100644
--- a/net/ethtool/channels.c
+++ b/net/ethtool/channels.c
@@ -116,10 +116,9 @@ int ethnl_set_channels(struct sk_buff *skb, struct genl_info *info)
         struct ethtool_channels channels = {};
         struct ethnl_req_info req_info = {};
         struct nlattr **tb = info->attrs;
-        const struct nlattr *err_attr;
+        u32 err_attr, max_rx_in_use = 0;
         const struct ethtool_ops *ops;
         struct net_device *dev;
-        u32 max_rx_in_use = 0;
         int ret;

         ret = ethnl_parse_header_dev_get(&req_info,
@@ -157,34 +156,35 @@ int ethnl_set_channels(struct sk_buff *skb, struct genl_info *info)

         /* ensure new channel counts are within limits */
         if (channels.rx_count > channels.max_rx)
-                err_attr = tb[ETHTOOL_A_CHANNELS_RX_COUNT];
+                err_attr = ETHTOOL_A_CHANNELS_RX_COUNT;
         else if (channels.tx_count > channels.max_tx)
-                err_attr = tb[ETHTOOL_A_CHANNELS_TX_COUNT];
+                err_attr = ETHTOOL_A_CHANNELS_TX_COUNT;
         else if (channels.other_count > channels.max_other)
-                err_attr = tb[ETHTOOL_A_CHANNELS_OTHER_COUNT];
+                err_attr = ETHTOOL_A_CHANNELS_OTHER_COUNT;
         else if (channels.combined_count > channels.max_combined)
-                err_attr = tb[ETHTOOL_A_CHANNELS_COMBINED_COUNT];
+                err_attr = ETHTOOL_A_CHANNELS_COMBINED_COUNT;
         else
-                err_attr = NULL;
+                err_attr = 0;
         if (err_attr) {
                 ret = -EINVAL;
-                NL_SET_ERR_MSG_ATTR(info->extack, err_attr,
+                NL_SET_ERR_MSG_ATTR(info->extack, tb[err_attr],
                                     "requested channel count exceeds maximum");
                 goto out_ops;
         }

         /* ensure there is at least one RX and one TX channel */
         if (!channels.combined_count && !channels.rx_count)
-                err_attr = tb[ETHTOOL_A_CHANNELS_RX_COUNT];
+                err_attr = ETHTOOL_A_CHANNELS_RX_COUNT;
         else if (!channels.combined_count && !channels.tx_count)
-                err_attr = tb[ETHTOOL_A_CHANNELS_TX_COUNT];
+                err_attr = ETHTOOL_A_CHANNELS_TX_COUNT;
         else
-                err_attr = NULL;
+                err_attr = 0;
         if (err_attr) {
                 if (mod_combined)
-                        err_attr = tb[ETHTOOL_A_CHANNELS_COMBINED_COUNT];
+                        err_attr = ETHTOOL_A_CHANNELS_COMBINED_COUNT;
                 ret = -EINVAL;
-                NL_SET_ERR_MSG_ATTR(info->extack, err_attr, "requested channel counts would result in no RX or TX channel being configured");
+                NL_SET_ERR_MSG_ATTR(info->extack, tb[err_attr],
+                                    "requested channel counts would result in no RX or TX channel being configured");
                 goto out_ops;
         }

diff --git a/net/ipv4/cipso_ipv4.c b/net/ipv4/cipso_ipv4.c
index 471d33a0d095f..be09c7669a799 100644
--- a/net/ipv4/cipso_ipv4.c
+++ b/net/ipv4/cipso_ipv4.c
@@ -519,16 +519,10 @@ int cipso_v4_doi_remove(u32 doi, struct netlbl_audit *audit_info)
                 ret_val = -ENOENT;
                 goto doi_remove_return;
         }
-        if (!refcount_dec_and_test(&doi_def->refcount)) {
-                spin_unlock(&cipso_v4_doi_list_lock);
-                ret_val = -EBUSY;
-                goto doi_remove_return;
-        }
         list_del_rcu(&doi_def->list);
         spin_unlock(&cipso_v4_doi_list_lock);

-        cipso_v4_cache_invalidate();
-        call_rcu(&doi_def->rcu, cipso_v4_doi_free_rcu);
+        cipso_v4_doi_putdef(doi_def);
         ret_val = 0;

 doi_remove_return:
@@ -585,9 +579,6 @@ void cipso_v4_doi_putdef(struct cipso_v4_doi *doi_def)

         if (!refcount_dec_and_test(&doi_def->refcount))
                 return;
-        spin_lock(&cipso_v4_doi_list_lock);
-        list_del_rcu(&doi_def->list);
-        spin_unlock(&cipso_v4_doi_list_lock);

         cipso_v4_cache_invalidate();
         call_rcu(&doi_def->rcu, cipso_v4_doi_free_rcu);
diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
index 76a420c76f16e..f6cc26de5ed30 100644
--- a/net/ipv4/ip_tunnel.c
+++ b/net/ipv4/ip_tunnel.c
@@ -502,8 +502,7 @@ static int tnl_update_pmtu(struct net_device *dev, struct sk_buff *skb,
                 if (!skb_is_gso(skb) &&
                     (inner_iph->frag_off & htons(IP_DF)) &&
                     mtu < pkt_size) {
-                        memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
-                        icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, htonl(mtu));
+                        icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, htonl(mtu));
                         return -E2BIG;
                 }
         }
@@ -527,7 +526,7 @@ static int tnl_update_pmtu(struct net_device *dev, struct sk_buff *skb,

                 if (!skb_is_gso(skb) && mtu >= IPV6_MIN_MTU &&
                     mtu < pkt_size) {
-                        icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
+                        icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
                         return -E2BIG;
                 }
         }
diff --git a/net/ipv4/ip_vti.c b/net/ipv4/ip_vti.c
index abc171e79d3e4..eb207089ece0b 100644
--- a/net/ipv4/ip_vti.c
+++ b/net/ipv4/ip_vti.c
@@ -238,13 +238,13 @@ static netdev_tx_t vti_xmit(struct sk_buff *skb, struct net_device *dev,
         if (skb->len > mtu) {
                 skb_dst_update_pmtu_no_confirm(skb, mtu);
                 if (skb->protocol == htons(ETH_P_IP)) {
-                        icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
-                                  htonl(mtu));
+                        icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
+                                      htonl(mtu));
                 } else {
                         if (mtu < IPV6_MIN_MTU)
                                 mtu = IPV6_MIN_MTU;

-                        icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
+                        icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
                 }

                 dst_release(dst);
diff --git a/net/ipv4/nexthop.c b/net/ipv4/nexthop.c
index e53e43aef7854..bad4037d257bc 100644
--- a/net/ipv4/nexthop.c
+++ b/net/ipv4/nexthop.c
@@ -1364,7 +1364,7 @@ out:

 /* rtnl */
 /* remove all nexthops tied to a device being deleted */
-static void nexthop_flush_dev(struct net_device *dev)
+static void nexthop_flush_dev(struct net_device *dev, unsigned long event)
 {
         unsigned int hash = nh_dev_hashfn(dev->ifindex);
         struct net *net = dev_net(dev);
@@ -1376,6 +1376,10 @@ static void nexthop_flush_dev(struct net_device *dev)
                 if (nhi->fib_nhc.nhc_dev != dev)
                         continue;

+                if (nhi->reject_nh &&
+                    (event == NETDEV_DOWN || event == NETDEV_CHANGE))
+                        continue;
+
                 remove_nexthop(net, nhi->nh_parent, NULL);
         }
 }
@@ -2122,11 +2126,11 @@ static int nh_netdev_event(struct notifier_block *this,
         switch (event) {
         case NETDEV_DOWN:
         case NETDEV_UNREGISTER:
-                nexthop_flush_dev(dev);
+                nexthop_flush_dev(dev, event);
                 break;
         case NETDEV_CHANGE:
                 if (!(dev_get_flags(dev) & (IFF_RUNNING | IFF_LOWER_UP)))
-                        nexthop_flush_dev(dev);
+                        nexthop_flush_dev(dev, event);
                 break;
         case NETDEV_CHANGEMTU:
                 info_ext = ptr;
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 32545ecf2ab10..1b10c54ce4718 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -3431,16 +3431,23 @@ static int do_tcp_setsockopt(struct sock *sk, int level, int optname,
                 break;

         case TCP_QUEUE_SEQ:
-                if (sk->sk_state != TCP_CLOSE)
+                if (sk->sk_state != TCP_CLOSE) {
                         err = -EPERM;
-                else if (tp->repair_queue == TCP_SEND_QUEUE)
-                        WRITE_ONCE(tp->write_seq, val);
-                else if (tp->repair_queue == TCP_RECV_QUEUE) {
-                        WRITE_ONCE(tp->rcv_nxt, val);
-                        WRITE_ONCE(tp->copied_seq, val);
-                }
-                else
+                } else if (tp->repair_queue == TCP_SEND_QUEUE) {
+                        if (!tcp_rtx_queue_empty(sk))
+                                err = -EPERM;
+                        else
+                                WRITE_ONCE(tp->write_seq, val);
+                } else if (tp->repair_queue == TCP_RECV_QUEUE) {
+                        if (tp->rcv_nxt != tp->copied_seq) {
+                                err = -EPERM;
+                        } else {
+                                WRITE_ONCE(tp->rcv_nxt, val);
+                                WRITE_ONCE(tp->copied_seq, val);
+                        }
+                } else {
                         err = -EINVAL;
+                }
                 break;

         case TCP_REPAIR_OPTIONS:
@@ -4088,7 +4095,8 @@ static int do_tcp_getsockopt(struct sock *sk, int level,

                 if (get_user(len, optlen))
                         return -EFAULT;
-                if (len < offsetofend(struct tcp_zerocopy_receive, length))
+                if (len < 0 ||
+                    len < offsetofend(struct tcp_zerocopy_receive, length))
                         return -EINVAL;
                 if (len > sizeof(zc)) {
                         len = sizeof(zc);
diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
index cfc872689b997..ab770f7ccb307 100644
--- a/net/ipv4/udp_offload.c
+++ b/net/ipv4/udp_offload.c
@@ -525,7 +525,7 @@ struct sk_buff *udp_gro_receive(struct list_head *head, struct sk_buff *skb,
         }

         if (!sk || NAPI_GRO_CB(skb)->encap_mark ||
-            (skb->ip_summed != CHECKSUM_PARTIAL &&
+            (uh->check && skb->ip_summed != CHECKSUM_PARTIAL &&
              NAPI_GRO_CB(skb)->csum_cnt == 0 &&
              !NAPI_GRO_CB(skb)->csum_valid) ||
             !udp_sk(sk)->gro_receive)
diff --git a/net/ipv6/calipso.c b/net/ipv6/calipso.c
index 51184a70ac7e5..1578ed9e97d89 100644
--- a/net/ipv6/calipso.c
+++ b/net/ipv6/calipso.c
@@ -83,6 +83,9 @@ struct calipso_map_cache_entry {

 static struct calipso_map_cache_bkt *calipso_cache;

+static void calipso_cache_invalidate(void);
+static void calipso_doi_putdef(struct calipso_doi *doi_def);
+
 /* Label Mapping Cache Functions
  */

@@ -444,15 +447,10 @@ static int calipso_doi_remove(u32 doi, struct netlbl_audit *audit_info)
                 ret_val = -ENOENT;
                 goto doi_remove_return;
         }
-        if (!refcount_dec_and_test(&doi_def->refcount)) {
-                spin_unlock(&calipso_doi_list_lock);
-                ret_val = -EBUSY;
-                goto doi_remove_return;
-        }
         list_del_rcu(&doi_def->list);
         spin_unlock(&calipso_doi_list_lock);

-        call_rcu(&doi_def->rcu, calipso_doi_free_rcu);
+        calipso_doi_putdef(doi_def);
         ret_val = 0;

 doi_remove_return:
@@ -508,10 +506,8 @@ static void calipso_doi_putdef(struct calipso_doi *doi_def)

         if (!refcount_dec_and_test(&doi_def->refcount))
                 return;
-        spin_lock(&calipso_doi_list_lock);
-        list_del_rcu(&doi_def->list);
-        spin_unlock(&calipso_doi_list_lock);

+        calipso_cache_invalidate();
         call_rcu(&doi_def->rcu, calipso_doi_free_rcu);
 }

diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
index c3bc89b6b1a1a..1baf43aacb2e4 100644
--- a/net/ipv6/ip6_gre.c
+++ b/net/ipv6/ip6_gre.c
@@ -678,8 +678,8 @@ static int prepare_ip6gre_xmit_ipv6(struct sk_buff *skb,

                 tel = (struct ipv6_tlv_tnl_enc_lim *)&skb_network_header(skb)[offset];
                 if (tel->encap_limit == 0) {
-                        icmpv6_send(skb, ICMPV6_PARAMPROB,
-                                    ICMPV6_HDR_FIELD, offset + 2);
+                        icmpv6_ndo_send(skb, ICMPV6_PARAMPROB,
+                                        ICMPV6_HDR_FIELD, offset + 2);
                         return -1;
                 }
                 *encap_limit = tel->encap_limit - 1;
@@ -805,8 +805,8 @@ static inline int ip6gre_xmit_ipv4(struct sk_buff *skb, struct net_device *dev)
         if (err != 0) {
                 /* XXX: send ICMP error even if DF is not set. */
                 if (err == -EMSGSIZE)
-                        icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
-                                  htonl(mtu));
+                        icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
+                                      htonl(mtu));
                 return -1;
         }

@@ -837,7 +837,7 @@ static inline int ip6gre_xmit_ipv6(struct sk_buff *skb, struct net_device *dev)
                          &mtu, skb->protocol);
         if (err != 0) {
                 if (err == -EMSGSIZE)
-                        icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
+                        icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
                 return -1;
         }

@@ -1063,10 +1063,10 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
                 /* XXX: send ICMP error even if DF is not set. */
                 if (err == -EMSGSIZE) {
                         if (skb->protocol == htons(ETH_P_IP))
-                                icmp_send(skb, ICMP_DEST_UNREACH,
-                                          ICMP_FRAG_NEEDED, htonl(mtu));
+                                icmp_ndo_send(skb, ICMP_DEST_UNREACH,
+                                              ICMP_FRAG_NEEDED, htonl(mtu));
                         else
-                                icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
+                                icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
                 }

                 goto tx_err;
diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
index a7950baa05e51..3fa0eca5a06f8 100644
--- a/net/ipv6/ip6_tunnel.c
+++ b/net/ipv6/ip6_tunnel.c
@@ -1332,8 +1332,8 @@ ipxip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev,

                 tel = (void *)&skb_network_header(skb)[offset];
                 if (tel->encap_limit == 0) {
-                        icmpv6_send(skb, ICMPV6_PARAMPROB,
-                                    ICMPV6_HDR_FIELD, offset + 2);
+                        icmpv6_ndo_send(skb, ICMPV6_PARAMPROB,
+                                        ICMPV6_HDR_FIELD, offset + 2);
                         return -1;
                 }
                 encap_limit = tel->encap_limit - 1;
@@ -1385,11 +1385,11 @@ ipxip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev,
                 if (err == -EMSGSIZE)
                         switch (protocol) {
                         case IPPROTO_IPIP:
-                                icmp_send(skb, ICMP_DEST_UNREACH,
-                                          ICMP_FRAG_NEEDED, htonl(mtu));
+                                icmp_ndo_send(skb, ICMP_DEST_UNREACH,
+                                              ICMP_FRAG_NEEDED, htonl(mtu));
                                 break;
                         case IPPROTO_IPV6:
-                                icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
+                                icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
                                 break;
                         default:
                                 break;
diff --git a/net/ipv6/ip6_vti.c b/net/ipv6/ip6_vti.c
index 0225fd6941925..f10e7a72ea624 100644
--- a/net/ipv6/ip6_vti.c
+++ b/net/ipv6/ip6_vti.c
@@ -521,10 +521,10 @@ vti6_xmit(struct sk_buff *skb, struct net_device *dev, struct flowi *fl)
                         if (mtu < IPV6_MIN_MTU)
                                 mtu = IPV6_MIN_MTU;

-                        icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
+                        icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
                 } else {
-                        icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
-                                  htonl(mtu));
+                        icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
+                                      htonl(mtu));
                 }

                 err = -EMSGSIZE;
diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
index 93636867aee28..63ccd9f2dcccf 100644
--- a/net/ipv6/sit.c
+++ b/net/ipv6/sit.c
@@ -987,7 +987,7 @@ static netdev_tx_t ipip6_tunnel_xmit(struct sk_buff *skb,
                 skb_dst_update_pmtu_no_confirm(skb, mtu);

                 if (skb->len > mtu && !skb_is_gso(skb)) {
-                        icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
+                        icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
                         ip_rt_put(rt);
                         goto tx_error;
                 }
diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
index 7be5103ff2a84..203890e378cb0 100644
--- a/net/l2tp/l2tp_core.c
+++ b/net/l2tp/l2tp_core.c
@@ -649,9 +649,9 @@ void l2tp_recv_common(struct l2tp_session *session, struct sk_buff *skb,
         /* Parse and check optional cookie */
         if (session->peer_cookie_len > 0) {
                 if (memcmp(ptr, &session->peer_cookie[0], session->peer_cookie_len)) {
-                        pr_warn_ratelimited("%s: cookie mismatch (%u/%u). Discarding.\n",
-                                            tunnel->name, tunnel->tunnel_id,
-                                            session->session_id);
+                        pr_debug_ratelimited("%s: cookie mismatch (%u/%u). Discarding.\n",
+                                             tunnel->name, tunnel->tunnel_id,
+                                             session->session_id);
                         atomic_long_inc(&session->stats.rx_cookie_discards);
                         goto discard;
                 }
@@ -702,8 +702,8 @@ void l2tp_recv_common(struct l2tp_session *session, struct sk_buff *skb,
                  * If user has configured mandatory sequence numbers, discard.
                  */
                 if (session->recv_seq) {
-                        pr_warn_ratelimited("%s: recv data has no seq numbers when required. Discarding.\n",
-                                            session->name);
+                        pr_debug_ratelimited("%s: recv data has no seq numbers when required. Discarding.\n",
+                                             session->name);
                         atomic_long_inc(&session->stats.rx_seq_discards);
                         goto discard;
                 }
@@ -718,8 +718,8 @@ void l2tp_recv_common(struct l2tp_session *session, struct sk_buff *skb,
                         session->send_seq = 0;
                         l2tp_session_set_header_len(session, tunnel->version);
                 } else if (session->send_seq) {
-                        pr_warn_ratelimited("%s: recv data has no seq numbers when required. Discarding.\n",
-                                            session->name);
+                        pr_debug_ratelimited("%s: recv data has no seq numbers when required. Discarding.\n",
+                                             session->name);
                         atomic_long_inc(&session->stats.rx_seq_discards);
                         goto discard;
                 }
@@ -809,9 +809,9 @@ static int l2tp_udp_recv_core(struct l2tp_tunnel *tunnel, struct sk_buff *skb)

         /* Short packet? */
         if (!pskb_may_pull(skb, L2TP_HDR_SIZE_MAX)) {
-                pr_warn_ratelimited("%s: recv short packet (len=%d)\n",
-                                    tunnel->name, skb->len);
-                goto error;
+                pr_debug_ratelimited("%s: recv short packet (len=%d)\n",
+                                     tunnel->name, skb->len);
+                goto invalid;
         }

         /* Point to L2TP header */
@@ -824,9 +824,9 @@ static int l2tp_udp_recv_core(struct l2tp_tunnel *tunnel, struct sk_buff *skb)
         /* Check protocol version */
         version = hdrflags & L2TP_HDR_VER_MASK;
         if (version != tunnel->version) {
-                pr_warn_ratelimited("%s: recv protocol version mismatch: got %d expected %d\n",
-                                    tunnel->name, version, tunnel->version);
-                goto error;
+                pr_debug_ratelimited("%s: recv protocol version mismatch: got %d expected %d\n",
+                                     tunnel->name, version, tunnel->version);
+                goto invalid;
         }

         /* Get length of L2TP packet */
@@ -834,7 +834,7 @@ static int l2tp_udp_recv_core(struct l2tp_tunnel *tunnel, struct sk_buff *skb)

         /* If type is control packet, it is handled by userspace. */
         if (hdrflags & L2TP_HDRFLAG_T)
-                goto error;
+                goto pass;

         /* Skip flags */
         ptr += 2;
@@ -863,21 +863,24 @@ static int l2tp_udp_recv_core(struct l2tp_tunnel *tunnel, struct sk_buff *skb)
                 l2tp_session_dec_refcount(session);

                 /* Not found? Pass to userspace to deal with */
-                pr_warn_ratelimited("%s: no session found (%u/%u). Passing up.\n",
-                                    tunnel->name, tunnel_id, session_id);
-                goto error;
+                pr_debug_ratelimited("%s: no session found (%u/%u). Passing up.\n",
+                                     tunnel->name, tunnel_id, session_id);
+                goto pass;
         }

         if (tunnel->version == L2TP_HDR_VER_3 &&
             l2tp_v3_ensure_opt_in_linear(session, skb, &ptr, &optr))
-                goto error;
+                goto invalid;

         l2tp_recv_common(session, skb, ptr, optr, hdrflags, length);
         l2tp_session_dec_refcount(session);

         return 0;

-error:
+invalid:
+        atomic_long_inc(&tunnel->stats.rx_invalid);
+
+pass:
         /* Put UDP header back */
         __skb_push(skb, sizeof(struct udphdr));

diff --git a/net/l2tp/l2tp_core.h b/net/l2tp/l2tp_core.h
index cb21d906343e8..98ea98eb9567b 100644
--- a/net/l2tp/l2tp_core.h
+++ b/net/l2tp/l2tp_core.h
@@ -39,6 +39,7 @@ struct l2tp_stats {
         atomic_long_t rx_oos_packets;
         atomic_long_t rx_errors;
         atomic_long_t rx_cookie_discards;
+        atomic_long_t rx_invalid;
 };

 struct l2tp_tunnel;
diff --git a/net/l2tp/l2tp_netlink.c b/net/l2tp/l2tp_netlink.c
index 83956c9ee1fcc..96eb91be9238b 100644
--- a/net/l2tp/l2tp_netlink.c
+++ b/net/l2tp/l2tp_netlink.c
@@ -428,6 +428,9 @@ static int l2tp_nl_tunnel_send(struct sk_buff *skb, u32 portid, u32 seq, int fla
                               L2TP_ATTR_STATS_PAD) ||
             nla_put_u64_64bit(skb, L2TP_ATTR_RX_ERRORS,
                               atomic_long_read(&tunnel->stats.rx_errors),
+                              L2TP_ATTR_STATS_PAD) ||
+            nla_put_u64_64bit(skb, L2TP_ATTR_RX_INVALID,
+                              atomic_long_read(&tunnel->stats.rx_invalid),
                               L2TP_ATTR_STATS_PAD))
                 goto nla_put_failure;
         nla_nest_end(skb, nest);
@@ -771,6 +774,9 @@ static int l2tp_nl_session_send(struct sk_buff *skb, u32 portid, u32 seq, int fl
                               L2TP_ATTR_STATS_PAD) ||
             nla_put_u64_64bit(skb, L2TP_ATTR_RX_ERRORS,
                               atomic_long_read(&session->stats.rx_errors),
+                              L2TP_ATTR_STATS_PAD) ||
+            nla_put_u64_64bit(skb, L2TP_ATTR_RX_INVALID,
+                              atomic_long_read(&session->stats.rx_invalid),
                               L2TP_ATTR_STATS_PAD))
                 goto nla_put_failure;
         nla_nest_end(skb, nest);
diff --git a/net/mpls/mpls_gso.c b/net/mpls/mpls_gso.c
index b1690149b6fa0..1482259de9b5d 100644
--- a/net/mpls/mpls_gso.c
+++ b/net/mpls/mpls_gso.c
@@ -14,6 +14,7 @@
 #include <linux/netdev_features.h>
 #include <linux/netdevice.h>
 #include <linux/skbuff.h>
+#include <net/mpls.h>

 static struct sk_buff *mpls_gso_segment(struct sk_buff *skb,
                                         netdev_features_t features)
@@ -27,6 +28,8 @@ static struct sk_buff *mpls_gso_segment(struct sk_buff *skb,

         skb_reset_network_header(skb);
         mpls_hlen = skb_inner_network_header(skb) - skb_network_header(skb);
+        if (unlikely(!mpls_hlen || mpls_hlen % MPLS_HLEN))
+                goto out;
         if (unlikely(!pskb_may_pull(skb, mpls_hlen)))
                 goto out;

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index b51872b9dd619..056846eb2e5bd 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -114,11 +114,7 @@ static int __mptcp_socket_create(struct mptcp_sock *msk)
         list_add(&subflow->node, &msk->conn_list);
         sock_hold(ssock->sk);
         subflow->request_mptcp = 1;
-
-        /* accept() will wait on first subflow sk_wq, and we always wakes up
-         * via msk->sk_socket
-         */
-        RCU_INIT_POINTER(msk->first->sk_wq, &sk->sk_socket->wq);
+        mptcp_sock_graft(msk->first, sk->sk_socket);

         return 0;
 }
@@ -1180,6 +1176,7 @@ static bool mptcp_tx_cache_refill(struct sock *sk, int size,
          */
         while (skbs->qlen > 1) {
                 skb = __skb_dequeue_tail(skbs);
+                *total_ts -= skb->truesize;
                 __kfree_skb(skb);
         }
         return skbs->qlen > 0;
@@ -2114,8 +2111,7 @@ static struct sock *mptcp_subflow_get_retrans(const struct mptcp_sock *msk)
 void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
                        struct mptcp_subflow_context *subflow)
 {
-        bool dispose_socket = false;
-        struct socket *sock;
+        struct mptcp_sock *msk = mptcp_sk(sk);

         list_del(&subflow->node);

@@ -2124,11 +2120,8 @@ void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
         /* if we are invoked by the msk cleanup code, the subflow is
          * already orphaned
          */
-        sock = ssk->sk_socket;
-        if (sock) {
-                dispose_socket = sock != sk->sk_socket;
+        if (ssk->sk_socket)
                 sock_orphan(ssk);
-        }

         subflow->disposable = 1;

@@ -2146,10 +2139,11 @@ void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
                 __sock_put(ssk);
         }
         release_sock(ssk);
-        if (dispose_socket)
-                iput(SOCK_INODE(sock));

         sock_put(ssk);
+
+        if (ssk == msk->last_snd)
+                msk->last_snd = NULL;
 }

 static unsigned int mptcp_sync_mss(struct sock *sk, u32 pmtu)
@@ -2535,6 +2529,12 @@ static void __mptcp_destroy_sock(struct sock *sk)

         pr_debug("msk=%p", msk);

+        /* dispose the ancillatory tcp socket, if any */
+        if (msk->subflow) {
+                iput(SOCK_INODE(msk->subflow));
+                msk->subflow = NULL;
+        }
+
         /* be sure to always acquire the join list lock, to sync vs
          * mptcp_finish_join().
          */
@@ -2585,20 +2585,10 @@ cleanup:
         inet_csk(sk)->icsk_mtup.probe_timestamp = tcp_jiffies32;
         list_for_each_entry(subflow, &mptcp_sk(sk)->conn_list, node) {
                 struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
-                bool slow, dispose_socket;
-                struct socket *sock;
+                bool slow = lock_sock_fast(ssk);

-                slow = lock_sock_fast(ssk);
-                sock = ssk->sk_socket;
-                dispose_socket = sock && sock != sk->sk_socket;
                 sock_orphan(ssk);
                 unlock_sock_fast(ssk, slow);
-
-                /* for the outgoing subflows we additionally need to free
-                 * the associated socket
-                 */
-                if (dispose_socket)
-                        iput(SOCK_INODE(sock));
         }
         sock_orphan(sk);

@@ -3040,7 +3030,7 @@ void mptcp_finish_connect(struct sock *ssk)
         mptcp_rcv_space_init(msk, ssk);
 }

-static void mptcp_sock_graft(struct sock *sk, struct socket *parent)
+void mptcp_sock_graft(struct sock *sk, struct socket *parent)
 {
         write_lock_bh(&sk->sk_callback_lock);
         rcu_assign_pointer(sk->sk_wq, &parent->wq);
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index d6ca1a5b94fc0..18fef4273bdc6 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -50,14 +50,15 @@
 #define TCPOLEN_MPTCP_DSS_MAP64		14
 #define TCPOLEN_MPTCP_DSS_CHECKSUM	2
 #define TCPOLEN_MPTCP_ADD_ADDR		16
-#define TCPOLEN_MPTCP_ADD_ADDR_PORT	20
+#define TCPOLEN_MPTCP_ADD_ADDR_PORT	18
 #define TCPOLEN_MPTCP_ADD_ADDR_BASE	8
-#define TCPOLEN_MPTCP_ADD_ADDR_BASE_PORT	12
+#define TCPOLEN_MPTCP_ADD_ADDR_BASE_PORT	10
 #define TCPOLEN_MPTCP_ADD_ADDR6		28
-#define TCPOLEN_MPTCP_ADD_ADDR6_PORT	32
+#define TCPOLEN_MPTCP_ADD_ADDR6_PORT	30
 #define TCPOLEN_MPTCP_ADD_ADDR6_BASE	20
-#define TCPOLEN_MPTCP_ADD_ADDR6_BASE_PORT	24
-#define TCPOLEN_MPTCP_PORT_LEN		4
+#define TCPOLEN_MPTCP_ADD_ADDR6_BASE_PORT	22
+#define TCPOLEN_MPTCP_PORT_LEN		2
+#define TCPOLEN_MPTCP_PORT_ALIGN	2
 #define TCPOLEN_MPTCP_RM_ADDR_BASE	4
 #define TCPOLEN_MPTCP_FASTCLOSE	12

@@ -459,6 +460,7 @@ void mptcp_subflow_shutdown(struct sock *sk, struct sock *ssk, int how);
 void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
                        struct mptcp_subflow_context *subflow);
 void mptcp_subflow_reset(struct sock *ssk);
+void mptcp_sock_graft(struct sock *sk, struct socket *parent);

 /* called with sk socket lock held */
 int __mptcp_subflow_connect(struct sock *sk, const struct mptcp_addr_info *loc,
@@ -587,8 +589,9 @@ static inline unsigned int mptcp_add_addr_len(int family, bool echo, bool port)
                 len = TCPOLEN_MPTCP_ADD_ADDR6_BASE;
         if (!echo)
                 len += MPTCPOPT_THMAC_LEN;
+        /* account for 2 trailing 'nop' options */
         if (port)
-                len += TCPOLEN_MPTCP_PORT_LEN;
+                len += TCPOLEN_MPTCP_PORT_LEN + TCPOLEN_MPTCP_PORT_ALIGN;

         return len;
 }
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index 9d28f6e3dc49a..c3090003a17bd 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -1165,12 +1165,16 @@ int __mptcp_subflow_connect(struct sock *sk, const struct mptcp_addr_info *loc,
         if (err && err != -EINPROGRESS)
                 goto failed_unlink;

+        /* discard the subflow socket */
+        mptcp_sock_graft(ssk, sk->sk_socket);
+        iput(SOCK_INODE(sf));
         return err;

 failed_unlink:
         spin_lock_bh(&msk->join_list_lock);
         list_del(&subflow->node);
         spin_unlock_bh(&msk->join_list_lock);
+        sock_put(mptcp_subflow_tcp_sock(subflow));

 failed:
         subflow->disposable = 1;
diff --git a/net/netfilter/nf_nat_proto.c b/net/netfilter/nf_nat_proto.c
index e87b6bd6b3cdb..4731d21fc3ad8 100644
--- a/net/netfilter/nf_nat_proto.c
+++ b/net/netfilter/nf_nat_proto.c
@@ -646,8 +646,8 @@ nf_nat_ipv4_fn(void *priv, struct sk_buff *skb,
 }

 static unsigned int
-nf_nat_ipv4_in(void *priv, struct sk_buff *skb,
-               const struct nf_hook_state *state)
+nf_nat_ipv4_pre_routing(void *priv, struct sk_buff *skb,
+                        const struct nf_hook_state *state)
 {
         unsigned int ret;
         __be32 daddr = ip_hdr(skb)->daddr;
@@ -659,6 +659,23 @@ nf_nat_ipv4_in(void *priv, struct sk_buff *skb,
         return ret;
 }

+static unsigned int
+nf_nat_ipv4_local_in(void *priv, struct sk_buff *skb,
+                     const struct nf_hook_state *state)
+{
+        __be32 saddr = ip_hdr(skb)->saddr;
+        struct sock *sk = skb->sk;
+        unsigned int ret;
+
+        ret = nf_nat_ipv4_fn(priv, skb, state);
+
+        if (ret == NF_ACCEPT && sk && saddr != ip_hdr(skb)->saddr &&
+            !inet_sk_transparent(sk))
+                skb_orphan(skb); /* TCP edemux obtained wrong socket */
+
+        return ret;
+}
+
 static unsigned int
 nf_nat_ipv4_out(void *priv, struct sk_buff *skb,
                 const struct nf_hook_state *state)
@@ -736,7 +753,7 @@ nf_nat_ipv4_local_fn(void *priv, struct sk_buff *skb,
 static const struct nf_hook_ops nf_nat_ipv4_ops[] = {
         /* Before packet filtering, change destination */
         {
-                .hook           = nf_nat_ipv4_in,
+                .hook           = nf_nat_ipv4_pre_routing,
                 .pf             = NFPROTO_IPV4,
                 .hooknum        = NF_INET_PRE_ROUTING,
                 .priority       = NF_IP_PRI_NAT_DST,
@@ -757,7 +774,7 @@ static const struct nf_hook_ops nf_nat_ipv4_ops[] = {
         },
         /* After packet filtering, change source */
         {
-                .hook           = nf_nat_ipv4_fn,
+                .hook           = nf_nat_ipv4_local_in,
                 .pf             = NFPROTO_IPV4,
                 .hooknum        = NF_INET_LOCAL_IN,
                 .priority       = NF_IP_PRI_NAT_SRC,
diff --git a/net/netfilter/x_tables.c b/net/netfilter/x_tables.c
index acce622582e3d..bce6ca203d462 100644
--- a/net/netfilter/x_tables.c
+++ b/net/netfilter/x_tables.c
@@ -330,6 +330,7 @@ static int match_revfn(u8 af, const char *name, u8 revision, int *bestp)
         const struct xt_match *m;
         int have_rev = 0;

+        mutex_lock(&xt[af].mutex);
         list_for_each_entry(m, &xt[af].match, list) {
                 if (strcmp(m->name, name) == 0) {
                         if (m->revision > *bestp)
@@ -338,6 +339,7 @@ static int match_revfn(u8 af, const char *name, u8 revision, int *bestp)
                                 have_rev = 1;
                 }
         }
+        mutex_unlock(&xt[af].mutex);

         if (af != NFPROTO_UNSPEC && !have_rev)
                 return match_revfn(NFPROTO_UNSPEC, name, revision, bestp);
@@ -350,6 +352,7 @@ static int target_revfn(u8 af, const char *name, u8 revision, int *bestp)
         const struct xt_target *t;
         int have_rev = 0;

+        mutex_lock(&xt[af].mutex);
         list_for_each_entry(t, &xt[af].target, list) {
                 if (strcmp(t->name, name) == 0) {
                         if (t->revision > *bestp)
@@ -358,6 +361,7 @@ static int target_revfn(u8 af, const char *name, u8 revision, int *bestp)
                                 have_rev = 1;
                 }
         }
+        mutex_unlock(&xt[af].mutex);

         if (af != NFPROTO_UNSPEC && !have_rev)
                 return target_revfn(NFPROTO_UNSPEC, name, revision, bestp);
@@ -371,12 +375,10 @@ int xt_find_revision(u8 af, const char *name, u8 revision, int target,
 {
         int have_rev, best = -1;

-        mutex_lock(&xt[af].mutex);
         if (target == 1)
                 have_rev = target_revfn(af, name, revision, &best);
         else
                 have_rev = match_revfn(af, name, revision, &best);
-        mutex_unlock(&xt[af].mutex);

         /* Nothing at all? Return 0 to try loading module. */
         if (best == -1) {
diff --git a/net/netlabel/netlabel_cipso_v4.c b/net/netlabel/netlabel_cipso_v4.c
index 726dda95934c6..4f50a64315cf0 100644
--- a/net/netlabel/netlabel_cipso_v4.c
+++ b/net/netlabel/netlabel_cipso_v4.c
@@ -575,6 +575,7 @@ list_start:

                 break;
         }
+        cipso_v4_doi_putdef(doi_def);
         rcu_read_unlock();

         genlmsg_end(ans_skb, data);
@@ -583,12 +584,14 @@ list_start:
 list_retry:
         /* XXX - this limit is a guesstimate */
         if (nlsze_mult < 4) {
+                cipso_v4_doi_putdef(doi_def);
                 rcu_read_unlock();
                 kfree_skb(ans_skb);
                 nlsze_mult *= 2;
                 goto list_start;
         }
 list_failure_lock:
+        cipso_v4_doi_putdef(doi_def);
         rcu_read_unlock();
 list_failure:
         kfree_skb(ans_skb);
diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
index b34358282f379..ac2a4a7711da4 100644
--- a/net/qrtr/qrtr.c
+++ b/net/qrtr/qrtr.c
@@ -958,8 +958,10 @@ static int qrtr_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
         plen = (len + 3) & ~3;
         skb = sock_alloc_send_skb(sk, plen + QRTR_HDR_MAX_SIZE,
                                   msg->msg_flags & MSG_DONTWAIT, &rc);
-        if (!skb)
+        if (!skb) {
+                rc = -ENOMEM;
                 goto out_node;
+        }

         skb_reserve(skb, QRTR_HDR_MAX_SIZE);

diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
index 6fe4e5cc807c9..5f90ee76fd416 100644
--- a/net/sched/sch_api.c
+++ b/net/sched/sch_api.c
@@ -2167,7 +2167,7 @@ static int tc_dump_tclass_qdisc(struct Qdisc *q, struct sk_buff *skb,

 static int tc_dump_tclass_root(struct Qdisc *root, struct sk_buff *skb,
                                struct tcmsg *tcm, struct netlink_callback *cb,
-                               int *t_p, int s_t)
+                               int *t_p, int s_t, bool recur)
 {
         struct Qdisc *q;
         int b;
@@ -2178,7 +2178,7 @@ static int tc_dump_tclass_root(struct Qdisc *root, struct sk_buff *skb,
         if (tc_dump_tclass_qdisc(root, skb, tcm, cb, t_p, s_t) < 0)
                 return -1;

-        if (!qdisc_dev(root))
+        if (!qdisc_dev(root) || !recur)
                 return 0;

         if (tcm->tcm_parent) {
@@ -2213,13 +2213,13 @@ static int tc_dump_tclass(struct sk_buff *skb, struct netlink_callback *cb)
         s_t = cb->args[0];
         t = 0;

-        if (tc_dump_tclass_root(dev->qdisc, skb, tcm, cb, &t, s_t) < 0)
+        if (tc_dump_tclass_root(dev->qdisc, skb, tcm, cb, &t, s_t, true) < 0)
                 goto done;

         dev_queue = dev_ingress_queue(dev);
         if (dev_queue &&
             tc_dump_tclass_root(dev_queue->qdisc_sleeping, skb, tcm, cb,
-                                &t, s_t) < 0)
+                                &t, s_t, false) < 0)
                 goto done;

 done:
diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
index cf702a5f7fe5d..39ed0e0afe6d9 100644
--- a/net/sunrpc/sched.c
+++ b/net/sunrpc/sched.c
@@ -963,8 +963,11 @@ void rpc_execute(struct rpc_task *task)
 
 	rpc_set_active(task);
 	rpc_make_runnable(rpciod_workqueue, task);
-	if (!is_async)
+	if (!is_async) {
+		unsigned int pflags = memalloc_nofs_save();
 		__rpc_execute(task);
+		memalloc_nofs_restore(pflags);
+	}
 }
 
 static void rpc_async_schedule(struct work_struct *work)
diff --git a/samples/bpf/xdpsock_user.c b/samples/bpf/xdpsock_user.c
index db0cb73513a58..1e2a1105d0e67 100644
--- a/samples/bpf/xdpsock_user.c
+++ b/samples/bpf/xdpsock_user.c
@@ -1699,5 +1699,7 @@ int main(int argc, char **argv)
 
 	xdpsock_cleanup();
 
+	munmap(bufs, NUM_FRAMES * opt_xsk_frame_size);
+
 	return 0;
 }
diff --git a/security/commoncap.c b/security/commoncap.c
index 78598be45f101..26c1cb725dcbe 100644
--- a/security/commoncap.c
+++ b/security/commoncap.c
@@ -500,8 +500,7 @@ int cap_convert_nscap(struct dentry *dentry, const void **ivalue, size_t size)
 	__u32 magic, nsmagic;
 	struct inode *inode = d_backing_inode(dentry);
 	struct user_namespace *task_ns = current_user_ns(),
-		*fs_ns = inode->i_sb->s_user_ns,
-		*ancestor;
+		*fs_ns = inode->i_sb->s_user_ns;
 	kuid_t rootid;
 	size_t newsize;
 
@@ -524,15 +523,6 @@ int cap_convert_nscap(struct dentry *dentry, const void **ivalue, size_t size)
 	if (nsrootid == -1)
 		return -EINVAL;
 
-	/*
-	 * Do not allow allow adding a v3 filesystem capability xattr
-	 * if the rootid field is ambiguous.
-	 */
-	for (ancestor = task_ns->parent; ancestor; ancestor = ancestor->parent) {
-		if (from_kuid(ancestor, rootid) == 0)
-			return -EINVAL;
-	}
-
 	newsize = sizeof(struct vfs_ns_cap_data);
 	nscap = kmalloc(newsize, GFP_ATOMIC);
 	if (!nscap)
diff --git a/sound/pci/hda/hda_bind.c b/sound/pci/hda/hda_bind.c
index 6a85645663759..17a25e453f60c 100644
--- a/sound/pci/hda/hda_bind.c
+++ b/sound/pci/hda/hda_bind.c
@@ -47,6 +47,10 @@ static void hda_codec_unsol_event(struct hdac_device *dev, unsigned int ev)
 	if (codec->bus->shutdown)
 		return;
 
+	/* ignore unsol events during system suspend/resume */
+	if (codec->core.dev.power.power_state.event != PM_EVENT_ON)
+		return;
+
 	if (codec->patch_ops.unsol_event)
 		codec->patch_ops.unsol_event(codec, ev);
 }
diff --git a/sound/pci/hda/hda_controller.c b/sound/pci/hda/hda_controller.c
index 80016b7b6849e..b972d59eb1ec2 100644
--- a/sound/pci/hda/hda_controller.c
+++ b/sound/pci/hda/hda_controller.c
@@ -609,13 +609,6 @@ static int azx_pcm_open(struct snd_pcm_substream *substream)
 				     20,
 				     178000000);
 
-	/* by some reason, the playback stream stalls on PulseAudio with
-	 * tsched=1 when a capture stream triggers. Until we figure out the
-	 * real cause, disable tsched mode by telling the PCM info flag.
-	 */
-	if (chip->driver_caps & AZX_DCAPS_AMD_WORKAROUND)
-		runtime->hw.info |= SNDRV_PCM_INFO_BATCH;
-
 	if (chip->align_buffer_size)
 		/* constrain buffer sizes to be multiple of 128
 		   bytes. This is more efficient in terms of memory
diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
index 1233d4ee8a39d..253d538251ae1 100644
--- a/sound/pci/hda/hda_intel.c
+++ b/sound/pci/hda/hda_intel.c
@@ -1026,6 +1026,8 @@ static int azx_prepare(struct device *dev)
 	chip = card->private_data;
 	chip->pm_prepared = 1;
 
+	flush_work(&azx_bus(chip)->unsol_work);
+
 	/* HDA controller always requires different WAKEEN for runtime suspend
 	 * and system suspend, so don't use direct-complete here.
 	 */
diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
index 7e62aed172a9f..65057a1845598 100644
--- a/sound/pci/hda/patch_ca0132.c
+++ b/sound/pci/hda/patch_ca0132.c
@@ -1309,6 +1309,7 @@ static const struct snd_pci_quirk ca0132_quirks[] = {
 	SND_PCI_QUIRK(0x1102, 0x0013, "Recon3D", QUIRK_R3D),
 	SND_PCI_QUIRK(0x1102, 0x0018, "Recon3D", QUIRK_R3D),
 	SND_PCI_QUIRK(0x1102, 0x0051, "Sound Blaster AE-5", QUIRK_AE5),
+	SND_PCI_QUIRK(0x1102, 0x0191, "Sound Blaster AE-5 Plus", QUIRK_AE5),
 	SND_PCI_QUIRK(0x1102, 0x0081, "Sound Blaster AE-7", QUIRK_AE7),
 	{}
 };
diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
index d49cc4409d59c..a980a4eda51c9 100644
--- a/sound/pci/hda/patch_conexant.c
+++ b/sound/pci/hda/patch_conexant.c
@@ -149,6 +149,21 @@ static int cx_auto_vmaster_mute_led(struct led_classdev *led_cdev,
 	return 0;
 }
 
+static void cxt_init_gpio_led(struct hda_codec *codec)
+{
+	struct conexant_spec *spec = codec->spec;
+	unsigned int mask = spec->gpio_mute_led_mask | spec->gpio_mic_led_mask;
+
+	if (mask) {
+		snd_hda_codec_write(codec, 0x01, 0, AC_VERB_SET_GPIO_MASK,
+				    mask);
+		snd_hda_codec_write(codec, 0x01, 0, AC_VERB_SET_GPIO_DIRECTION,
+				    mask);
+		snd_hda_codec_write(codec, 0x01, 0, AC_VERB_SET_GPIO_DATA,
+				    spec->gpio_led);
+	}
+}
+
 static int cx_auto_init(struct hda_codec *codec)
 {
 	struct conexant_spec *spec = codec->spec;
@@ -156,6 +171,7 @@ static int cx_auto_init(struct hda_codec *codec)
 	if (!spec->dynamic_eapd)
 		cx_auto_turn_eapd(codec, spec->num_eapds, spec->eapds, true);
 
+	cxt_init_gpio_led(codec);
 	snd_hda_apply_fixup(codec, HDA_FIXUP_ACT_INIT);
 
 	return 0;
@@ -215,6 +231,7 @@ enum {
 	CXT_FIXUP_HP_SPECTRE,
 	CXT_FIXUP_HP_GATE_MIC,
 	CXT_FIXUP_MUTE_LED_GPIO,
+	CXT_FIXUP_HP_ZBOOK_MUTE_LED,
 	CXT_FIXUP_HEADSET_MIC,
 	CXT_FIXUP_HP_MIC_NO_PRESENCE,
 };
@@ -654,31 +671,36 @@ static int cxt_gpio_micmute_update(struct led_classdev *led_cdev,
 	return 0;
 }
 
-
-static void cxt_fixup_mute_led_gpio(struct hda_codec *codec,
-				    const struct hda_fixup *fix, int action)
+static void cxt_setup_mute_led(struct hda_codec *codec,
+			       unsigned int mute, unsigned int mic_mute)
 {
 	struct conexant_spec *spec = codec->spec;
-	static const struct hda_verb gpio_init[] = {
-		{ 0x01, AC_VERB_SET_GPIO_MASK, 0x03 },
-		{ 0x01, AC_VERB_SET_GPIO_DIRECTION, 0x03 },
-		{}
-	};
 
-	if (action == HDA_FIXUP_ACT_PRE_PROBE) {
+	spec->gpio_led = 0;
+	spec->mute_led_polarity = 0;
+	if (mute) {
 		snd_hda_gen_add_mute_led_cdev(codec, cxt_gpio_mute_update);
-		spec->gpio_led = 0;
-		spec->mute_led_polarity = 0;
-		spec->gpio_mute_led_mask = 0x01;
-		spec->gpio_mic_led_mask = 0x02;
+		spec->gpio_mute_led_mask = mute;
+	}
+	if (mic_mute) {
 		snd_hda_gen_add_micmute_led_cdev(codec, cxt_gpio_micmute_update);
+		spec->gpio_mic_led_mask = mic_mute;
 	}
-	snd_hda_add_verbs(codec, gpio_init);
-	if (spec->gpio_led)
-		snd_hda_codec_write(codec, 0x01, 0, AC_VERB_SET_GPIO_DATA,
-				    spec->gpio_led);
 }
 
+static void cxt_fixup_mute_led_gpio(struct hda_codec *codec,
+				    const struct hda_fixup *fix, int action)
+{
+	if (action == HDA_FIXUP_ACT_PRE_PROBE)
+		cxt_setup_mute_led(codec, 0x01, 0x02);
+}
+
+static void cxt_fixup_hp_zbook_mute_led(struct hda_codec *codec,
+					const struct hda_fixup *fix, int action)
+{
+	if (action == HDA_FIXUP_ACT_PRE_PROBE)
+		cxt_setup_mute_led(codec, 0x10, 0x20);
+}
 
 /* ThinkPad X200 & co with cxt5051 */
 static const struct hda_pintbl cxt_pincfg_lenovo_x200[] = {
@@ -839,6 +861,10 @@ static const struct hda_fixup cxt_fixups[] = {
 		.type = HDA_FIXUP_FUNC,
 		.v.func = cxt_fixup_mute_led_gpio,
 	},
+	[CXT_FIXUP_HP_ZBOOK_MUTE_LED] = {
+		.type = HDA_FIXUP_FUNC,
+		.v.func = cxt_fixup_hp_zbook_mute_led,
+	},
 	[CXT_FIXUP_HEADSET_MIC] = {
 		.type = HDA_FIXUP_FUNC,
 		.v.func = cxt_fixup_headset_mic,
@@ -917,6 +943,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
 	SND_PCI_QUIRK(0x103c, 0x8299, "HP 800 G3 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
 	SND_PCI_QUIRK(0x103c, 0x829a, "HP 800 G3 DM", CXT_FIXUP_HP_MIC_NO_PRESENCE),
 	SND_PCI_QUIRK(0x103c, 0x8402, "HP ProBook 645 G4", CXT_FIXUP_MUTE_LED_GPIO),
+	SND_PCI_QUIRK(0x103c, 0x8427, "HP ZBook Studio G5", CXT_FIXUP_HP_ZBOOK_MUTE_LED),
 	SND_PCI_QUIRK(0x103c, 0x8455, "HP Z2 G4", CXT_FIXUP_HP_MIC_NO_PRESENCE),
 	SND_PCI_QUIRK(0x103c, 0x8456, "HP Z2 G4 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
 	SND_PCI_QUIRK(0x103c, 0x8457, "HP Z2 G4 mini", CXT_FIXUP_HP_MIC_NO_PRESENCE),
@@ -956,6 +983,7 @@ static const struct hda_model_fixup cxt5066_fixup_models[] = {
 	{ .id = CXT_FIXUP_MUTE_LED_EAPD, .name = "mute-led-eapd" },
 	{ .id = CXT_FIXUP_HP_DOCK, .name = "hp-dock" },
 	{ .id = CXT_FIXUP_MUTE_LED_GPIO, .name = "mute-led-gpio" },
+	{ .id = CXT_FIXUP_HP_ZBOOK_MUTE_LED, .name = "hp-zbook-mute-led" },
 	{ .id = CXT_FIXUP_HP_MIC_NO_PRESENCE, .name = "hp-mic-fix" },
 	{}
 };
diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
index e405be7929e31..d6387106619ff 100644
--- a/sound/pci/hda/patch_hdmi.c
+++ b/sound/pci/hda/patch_hdmi.c
@@ -2472,6 +2472,18 @@ static void generic_hdmi_free(struct hda_codec *codec)
 }
 
 #ifdef CONFIG_PM
+static int generic_hdmi_suspend(struct hda_codec *codec)
+{
+	struct hdmi_spec *spec = codec->spec;
+	int pin_idx;
+
+	for (pin_idx = 0; pin_idx < spec->num_pins; pin_idx++) {
+		struct hdmi_spec_per_pin *per_pin = get_pin(spec, pin_idx);
+		cancel_delayed_work_sync(&per_pin->work);
+	}
+	return 0;
+}
+
 static int generic_hdmi_resume(struct hda_codec *codec)
 {
 	struct hdmi_spec *spec = codec->spec;
@@ -2495,6 +2507,7 @@ static const struct hda_codec_ops generic_hdmi_patch_ops = {
 	.build_controls = generic_hdmi_build_controls,
 	.unsol_event = hdmi_unsol_event,
 #ifdef CONFIG_PM
+	.suspend = generic_hdmi_suspend,
 	.resume = generic_hdmi_resume,
 #endif
 };
diff --git a/sound/usb/card.c b/sound/usb/card.c
index e08fbf8e3ee0f..3007922a8ed86 100644
--- a/sound/usb/card.c
+++ b/sound/usb/card.c
@@ -831,6 +831,9 @@ static int usb_audio_probe(struct usb_interface *intf,
 		snd_media_device_create(chip, intf);
 	}
 
+	if (quirk)
+		chip->quirk_type = quirk->type;
+
 	usb_chip[chip->index] = chip;
 	chip->intf[chip->num_interfaces] = intf;
 	chip->num_interfaces++;
@@ -905,6 +908,9 @@ static void usb_audio_disconnect(struct usb_interface *intf)
 		}
 	}
 
+	if (chip->quirk_type & QUIRK_SETUP_DISABLE_AUTOSUSPEND)
+		usb_enable_autosuspend(interface_to_usbdev(intf));
+
 	chip->num_interfaces--;
 	if (chip->num_interfaces <= 0) {
 		usb_chip[chip->index] = NULL;
diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
index 737b2729c0d37..d3001fb18141f 100644
--- a/sound/usb/quirks.c
+++ b/sound/usb/quirks.c
@@ -547,7 +547,7 @@ static int setup_disable_autosuspend(struct snd_usb_audio *chip,
 				     struct usb_driver *driver,
 				     const struct snd_usb_audio_quirk *quirk)
 {
-	driver->supports_autosuspend = 0;
+	usb_disable_autosuspend(interface_to_usbdev(iface));
 	return 1;	/* Continue with creating streams and mixer */
 }
 
@@ -1520,6 +1520,7 @@ bool snd_usb_get_sample_rate_quirk(struct snd_usb_audio *chip)
 	case USB_ID(0x1901, 0x0191): /* GE B850V3 CP2114 audio interface */
 	case USB_ID(0x21b4, 0x0081): /* AudioQuest DragonFly */
 	case USB_ID(0x2912, 0x30c8): /* Audioengine D1 */
+	case USB_ID(0x413c, 0xa506): /* Dell AE515 sound bar */
 		return true;
 	}
 
@@ -1670,6 +1671,14 @@ void snd_usb_ctl_msg_quirk(struct usb_device *dev, unsigned int pipe,
 	    && (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS)
 		msleep(20);
 
+	/*
+	 * Plantronics headsets (C320, C320-M, etc) need a delay to avoid
+	 * random microphone failures.
+	 */
+	if (USB_ID_VENDOR(chip->usb_id) == 0x047f &&
+	    (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS)
+		msleep(20);
+
 	/* Zoom R16/24, many Logitech(at least H650e/H570e/BCC950),
 	 * Jabra 550a, Kingston HyperX needs a tiny delay here,
 	 * otherwise requests like get/set frequency return
diff --git a/sound/usb/usbaudio.h b/sound/usb/usbaudio.h
index 215c1771dd570..60b9dd7df6bb7 100644
--- a/sound/usb/usbaudio.h
+++ b/sound/usb/usbaudio.h
@@ -27,6 +27,7 @@ struct snd_usb_audio {
 	struct snd_card *card;
 	struct usb_interface *intf[MAX_CARD_INTERFACES];
 	u32 usb_id;
+	uint16_t quirk_type;
 	struct mutex mutex;
 	unsigned int system_suspend;
 	atomic_t active;
diff --git a/tools/bpf/resolve_btfids/main.c b/tools/bpf/resolve_btfids/main.c
index 7409d7860aa6c..80d966cfcaa14 100644
--- a/tools/bpf/resolve_btfids/main.c
+++ b/tools/bpf/resolve_btfids/main.c
@@ -260,6 +260,11 @@ static struct btf_id *add_symbol(struct rb_root *root, char *name, size_t size)
 	return btf_id__add(root, id, false);
 }
 
+/* Older libelf.h and glibc elf.h might not yet define the ELF compression types. */
+#ifndef SHF_COMPRESSED
+#define SHF_COMPRESSED (1 << 11) /* Section with compressed data. */
+#endif
+
 /*
  * The data of compressed section should be aligned to 4
  * (for 32bit) or 8 (for 64 bit) bytes. The binutils ld
diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
index e3e41ceeb1bc6..06746d96742f3 100644
--- a/tools/lib/bpf/xsk.c
+++ b/tools/lib/bpf/xsk.c
@@ -535,15 +535,16 @@ static int xsk_lookup_bpf_maps(struct xsk_socket *xsk)
 		if (fd < 0)
 			continue;
 
+		memset(&map_info, 0, map_len);
 		err = bpf_obj_get_info_by_fd(fd, &map_info, &map_len);
 		if (err) {
 			close(fd);
 			continue;
 		}
 
-		if (!strcmp(map_info.name, "xsks_map")) {
+		if (!strncmp(map_info.name, "xsks_map", sizeof(map_info.name))) {
 			ctx->xsks_map_fd = fd;
-			continue;
+			break;
 		}
 
 		close(fd);
diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
index 62f3deb1d3a8b..e41a8f9b99d2d 100644
--- a/tools/perf/Makefile.perf
+++ b/tools/perf/Makefile.perf
@@ -600,7 +600,7 @@ arch_errno_hdr_dir := $(srctree)/tools
 arch_errno_tbl := $(srctree)/tools/perf/trace/beauty/arch_errno_names.sh
 
 $(arch_errno_name_array): $(arch_errno_tbl)
-	$(Q)$(SHELL) '$(arch_errno_tbl)' $(firstword $(CC)) $(arch_errno_hdr_dir) > $@
+	$(Q)$(SHELL) '$(arch_errno_tbl)' '$(patsubst -%,,$(CC))' $(arch_errno_hdr_dir) > $@
 
 sync_file_range_arrays := $(beauty_outdir)/sync_file_range_arrays.c
 sync_file_range_tbls := $(srctree)/tools/perf/trace/beauty/sync_file_range.sh
diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
index 80907bc32683a..f564b210d7614 100644
--- a/tools/perf/util/sort.c
+++ b/tools/perf/util/sort.c
@@ -3033,7 +3033,7 @@ int output_field_add(struct perf_hpp_list *list, char *tok)
 		if (strncasecmp(tok, sd->name, strlen(tok)))
 			continue;
 
-		if (sort__mode != SORT_MODE__MEMORY)
+		if (sort__mode != SORT_MODE__BRANCH)
 			return -EINVAL;
 
 		return __sort_dimension__add_output(list, sd);
@@ -3045,7 +3045,7 @@ int output_field_add(struct perf_hpp_list *list, char *tok)
 		if (strncasecmp(tok, sd->name, strlen(tok)))
 			continue;
 
-		if (sort__mode != SORT_MODE__BRANCH)
+		if (sort__mode != SORT_MODE__MEMORY)
 			return -EINVAL;
 
 		return __sort_dimension__add_output(list, sd);
diff --git a/tools/perf/util/trace-event-read.c b/tools/perf/util/trace-event-read.c
index f507dff713c9f..8a01af783310a 100644
--- a/tools/perf/util/trace-event-read.c
+++ b/tools/perf/util/trace-event-read.c
@@ -361,6 +361,7 @@ static int read_saved_cmdline(struct tep_handle *pevent)
 		pr_debug("error reading saved cmdlines\n");
 		goto out;
 	}
+	buf[ret] = '\0';
 
 	parse_saved_cmdline(pevent, buf, size);
 	ret = 0;
diff --git a/tools/testing/selftests/bpf/progs/netif_receive_skb.c b/tools/testing/selftests/bpf/progs/netif_receive_skb.c
index 6b670039ea679..1d8918dfbd3ff 100644
--- a/tools/testing/selftests/bpf/progs/netif_receive_skb.c
+++ b/tools/testing/selftests/bpf/progs/netif_receive_skb.c
@@ -16,6 +16,13 @@ bool skip = false;
 #define STRSIZE			2048
 #define EXPECTED_STRSIZE	256
 
+#if defined(bpf_target_s390)
+/* NULL points to a readable struct lowcore on s390, so take the last page */
+#define BADPTR			((void *)0xFFFFFFFFFFFFF000ULL)
+#else
+#define BADPTR			0
+#endif
+
 #ifndef ARRAY_SIZE
 #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
 #endif
@@ -113,11 +120,11 @@ int BPF_PROG(trace_netif_receive_skb, struct sk_buff *skb)
 	}
 
 	/* Check invalid ptr value */
-	p.ptr = 0;
+	p.ptr = BADPTR;
 	__ret = bpf_snprintf_btf(str, STRSIZE, &p, sizeof(p), 0);
 	if (__ret >= 0) {
-		bpf_printk("printing NULL should generate error, got (%d)",
-			   __ret);
+		bpf_printk("printing %llx should generate error, got (%d)",
+			   (unsigned long long)BADPTR, __ret);
 		ret = -ERANGE;
 	}
 
diff --git a/tools/testing/selftests/bpf/progs/test_tunnel_kern.c b/tools/testing/selftests/bpf/progs/test_tunnel_kern.c
index a621b58ab079d..9afe947cfae95 100644
--- a/tools/testing/selftests/bpf/progs/test_tunnel_kern.c
+++ b/tools/testing/selftests/bpf/progs/test_tunnel_kern.c
@@ -446,10 +446,8 @@ int _geneve_get_tunnel(struct __sk_buff *skb)
 	}
 
 	ret = bpf_skb_get_tunnel_opt(skb, &gopt, sizeof(gopt));
-	if (ret < 0) {
-		ERROR(ret);
-		return TC_ACT_SHOT;
-	}
+	if (ret < 0)
+		gopt.opt_class = 0;
 
 	bpf_trace_printk(fmt, sizeof(fmt),
 			key.tunnel_id, key.remote_ipv4, gopt.opt_class);
diff --git a/tools/testing/selftests/bpf/verifier/array_access.c b/tools/testing/selftests/bpf/verifier/array_access.c
index bed53b561e044..1b138cd2b187d 100644
--- a/tools/testing/selftests/bpf/verifier/array_access.c
+++ b/tools/testing/selftests/bpf/verifier/array_access.c
@@ -250,12 +250,13 @@
 	BPF_MOV64_IMM(BPF_REG_5, 0),
 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
 		     BPF_FUNC_csum_diff),
+	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 0xffff),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
 	.fixup_map_array_ro = { 3 },
 	.result = ACCEPT,
-	.retval = -29,
+	.retval = 65507,
 },
 {
 	"invalid write map access into a read-only array 1",
diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh
index 197e769c2ed16..f8cda822c1cec 100755
--- a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh
+++ b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh
@@ -86,11 +86,20 @@ test_ip6gretap()
 
 test_gretap_stp()
 {
+	# Sometimes after mirror installation, the neighbor's state is not valid.
+	# The reason is that there is no SW datapath activity related to the
+	# neighbor for the remote GRE address. Therefore whether the corresponding
+	# neighbor will be valid is a matter of luck, and the test is thus racy.
+	# Set the neighbor's state to permanent, so it would be always valid.
+	ip neigh replace 192.0.2.130 lladdr $(mac_get $h3) \
+		nud permanent dev br2
 	full_test_span_gre_stp gt4 $swp3.555 "mirror to gretap"
 }
 
 test_ip6gretap_stp()
 {
+	ip neigh replace 2001:db8:2::2 lladdr $(mac_get $h3) \
+		nud permanent dev br2
 	full_test_span_gre_stp gt6 $swp3.555 "mirror to ip6gretap"
 }