From 650929f1bdce50bab031b0886ae91d459edcd18e Mon Sep 17 00:00:00 2001
From: Andrey Ryabinin <[email protected]>
Date: Thu, 28 Dec 2017 19:06:20 +0300
Subject: [PATCH 236/242] x86/mm: Set MODULES_END to 0xffffffffff000000
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

commit f5a40711fa58f1c109165a4fec6078bf2dfd2bdc upstream.

Since f06bdd4001c2 ("x86/mm: Adapt MODULES_END based on fixmap section size")
kasan_mem_to_shadow(MODULES_END) could be not aligned to a page boundary.
So passing a page-unaligned address to kasan_populate_zero_shadow() has two
possible effects:

1) It may leave a one-page hole in the area that is supposed to be
   populated. After commit 21506525fb8d ("x86/kasan/64: Teach KASAN about
   the cpu_entry_area") that hole happens to be in the shadow covering the
   fixmap area and leads to a crash:

 BUG: unable to handle kernel paging request at fffffbffffe8ee04
 RIP: 0010:check_memory_region+0x5c/0x190
 Call Trace:
  <NMI>
  memcpy+0x1f/0x50
  ghes_copy_tofrom_phys+0xab/0x180
  ghes_read_estatus+0xfb/0x280
  ghes_notify_nmi+0x2b2/0x410
  nmi_handle+0x115/0x2c0
  default_do_nmi+0x57/0x110
  do_nmi+0xf8/0x150
  end_repeat_nmi+0x1a/0x1e

Note, the crash likely disappeared after commit 92a0f81d8957, which
changed the kasan_populate_zero_shadow() call back to the way it was
before commit 21506525fb8d.

2) An attempt to load a module near MODULES_END will fail, because
   __vmalloc_node_range(), called from kasan_module_alloc(), will hit the
   WARN_ON(!pte_none(*pte)) in vmap_pte_range() and bail out with an error.

To fix this we need to make kasan_mem_to_shadow(MODULES_END) page-aligned,
which means that MODULES_END should be 8*PAGE_SIZE aligned.
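
For reference, the requirement follows from KASAN's 8-to-1 shadow scaling:
kasan_mem_to_shadow() shifts the address right by KASAN_SHADOW_SCALE_SHIFT
(3 on x86-64), so the shadow address is page aligned only when the original
address is aligned to PAGE_SIZE << 3. A minimal user-space sketch of the
arithmetic (not kernel code; the shadow offset below is assumed to be the
x86-64 default):

  /* Sketch only: models KASAN's mem-to-shadow arithmetic in user space. */
  #include <stdint.h>
  #include <stdio.h>

  #define KASAN_SHADOW_SCALE_SHIFT 3               /* 1 shadow byte per 8 bytes */
  #define KASAN_SHADOW_OFFSET 0xdffffc0000000000UL /* assumed x86-64 default */
  #define PAGE_SIZE 4096UL

  static uint64_t mem_to_shadow(uint64_t addr)
  {
          return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
  }

  int main(void)
  {
          uint64_t modules_end = 0xffffffffff000000UL;

          /* The shadow address is page aligned iff the original address
           * is aligned to PAGE_SIZE << 3, i.e. 8*PAGE_SIZE (32 KB). */
          printf("8*PAGE_SIZE aligned: %d\n",
                 (modules_end & (8 * PAGE_SIZE - 1)) == 0);
          printf("shadow page aligned: %d\n",
                 (mem_to_shadow(modules_end) & (PAGE_SIZE - 1)) == 0);
          return 0;
  }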

The whole point of commit f06bdd4001c2 was to move MODULES_END down when
NR_CPUS is big, since then the cpu_entry_area takes a lot of fixmap space.
But since 92a0f81d8957 ("x86/cpu_entry_area: Move it out of the fixmap")
the cpu_entry_area is no longer in the fixmap, so we can just set
MODULES_END to a fixed, 8*PAGE_SIZE-aligned address.
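
As a sanity check on the new constant, a small sketch (assuming
__START_KERNEL_map is 0xffffffff80000000 and KERNEL_IMAGE_SIZE is 512 MB,
matching the kernel text mapping in the mm.txt hunk below) reproduces the
1520 MB module-space figure:

  /* Sketch only: recomputes the module-space figures quoted in this patch. */
  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          uint64_t start_kernel_map  = 0xffffffff80000000UL; /* __START_KERNEL_map */
          uint64_t kernel_image_size = 512UL << 20;          /* assumed 512 MB */
          uint64_t modules_vaddr     = start_kernel_map + kernel_image_size;
          uint64_t modules_end       = 0xffffffffff000000UL; /* new MODULES_END */

          /* Expect MODULES_VADDR = 0xffffffffa0000000 and
           * MODULES_LEN = 1520 MB, matching the mm.txt change below. */
          printf("MODULES_VADDR = %#lx\n", (unsigned long)modules_vaddr);
          printf("MODULES_LEN   = %lu MB\n",
                 (unsigned long)((modules_end - modules_vaddr) >> 20));
          return 0;
  }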

Fixes: f06bdd4001c2 ("x86/mm: Adapt MODULES_END based on fixmap section size")
Reported-by: Jakub Kicinski <[email protected]>
Signed-off-by: Andrey Ryabinin <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Thomas Garnier <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Fabian Grünbichler <[email protected]>
---
 Documentation/x86/x86_64/mm.txt         | 5 +----
 arch/x86/include/asm/pgtable_64_types.h | 2 +-
 2 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/Documentation/x86/x86_64/mm.txt b/Documentation/x86/x86_64/mm.txt
index ad41b3813f0a..ddd5ffd31bd0 100644
--- a/Documentation/x86/x86_64/mm.txt
+++ b/Documentation/x86/x86_64/mm.txt
@@ -43,7 +43,7 @@ ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
 ffffffef00000000 - fffffffeffffffff (=64 GB) EFI region mapping space
 ... unused hole ...
 ffffffff80000000 - ffffffff9fffffff (=512 MB) kernel text mapping, from phys 0
-ffffffffa0000000 - [fixmap start] (~1526 MB) module mapping space
+ffffffffa0000000 - fffffffffeffffff (1520 MB) module mapping space
 [fixmap start] - ffffffffff5fffff kernel-internal fixmap range
 ffffffffff600000 - ffffffffff600fff (=4 kB) legacy vsyscall ABI
 ffffffffffe00000 - ffffffffffffffff (=2 MB) unused hole
@@ -67,9 +67,6 @@ memory window (this size is arbitrary, it can be raised later if needed).
 The mappings are not part of any other kernel PGD and are only available
 during EFI runtime calls.
 
-The module mapping space size changes based on the CONFIG requirements for the
-following fixmap section.
-
 Note that if CONFIG_RANDOMIZE_MEMORY is enabled, the direct mapping of all
 physical memory, vmalloc/ioremap space and virtual memory map are randomized.
 Their order is preserved but their base will be offset early at boot time.
diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index e8a809ee0bb6..c92bd73b1e46 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -103,7 +103,7 @@ typedef struct { pteval_t pte; } pte_t;
 
 #define MODULES_VADDR (__START_KERNEL_map + KERNEL_IMAGE_SIZE)
 /* The module sections ends with the start of the fixmap */
-#define MODULES_END __fix_to_virt(__end_of_fixed_addresses + 1)
+#define MODULES_END _AC(0xffffffffff000000, UL)
 #define MODULES_LEN (MODULES_END - MODULES_VADDR)
 
 #define ESPFIX_PGD_ENTRY _AC(-2, UL)
--
2.14.2