CVE-2017-12188_0002-KVM-MMU-always-terminate-page-walks-at-level-1.patch

From ebe182a7c6221878cbb5d03e1eafa8002494f8cb Mon Sep 17 00:00:00 2001
From: Ladi Prosek <lprosek@redhat.com>
Date: Tue, 10 Oct 2017 17:30:59 +0200
Subject: [CVE-2017-12188 2/2] KVM: MMU: always terminate page walks at level 1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

is_last_gpte() is not equivalent to the pseudo-code given in commit
6bb69c9b69c31 ("KVM: MMU: simplify last_pte_bitmap") because an incorrect
value of last_nonleaf_level may override the result even if level == 1.

It is critical for is_last_gpte() to return true on level == 1 to
terminate page walks. Otherwise memory corruption may occur as level
is used as an index to various data structures throughout the page
walking code. Even though the actual bug would be wherever the MMU is
initialized (as in the previous patch), be defensive and ensure here
that is_last_gpte() returns the correct value.

This patch is also enough to fix CVE-2017-12188, and is suggested for
stable and distro kernels.
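
For illustration, a minimal user-space sketch of the bit trick involved
(not part of the patch; the helper names is_last_gpte_fixed and
is_last_gpte_buggy are invented here, and the constants mirror
arch/x86/kvm/mmu.c, where PT_PAGE_TABLE_LEVEL is 1 and
PT_PAGE_SIZE_MASK is bit 7):

#include <stdio.h>

#define PT_PAGE_TABLE_LEVEL 1u
#define PT_PAGE_SIZE_MASK   (1u << 7)

/* Patched ordering: mask large-page info first, then force bit 7 for
 * level == 1 last, so a bogus last_nonleaf_level cannot undo it. */
static unsigned is_last_gpte_fixed(unsigned last_nonleaf_level,
				   unsigned level, unsigned gpte)
{
	/* RHS wraps (bit 7 set) iff level < last_nonleaf_level. */
	gpte &= level - last_nonleaf_level;
	/* RHS wraps to ~0u iff level == PT_PAGE_TABLE_LEVEL. */
	gpte |= level - PT_PAGE_TABLE_LEVEL - 1;
	return gpte & PT_PAGE_SIZE_MASK;
}

/* Pre-patch ordering: the final &= can clear the bit the |= just set. */
static unsigned is_last_gpte_buggy(unsigned last_nonleaf_level,
				   unsigned level, unsigned gpte)
{
	gpte |= level - PT_PAGE_TABLE_LEVEL - 1;
	gpte &= level - last_nonleaf_level;
	return gpte & PT_PAGE_SIZE_MASK;
}

int main(void)
{
	/* Non-leaf gpte (bit 7 clear) at level 1, with a corrupted
	 * last_nonleaf_level of 1 (a sane MMU setup uses >= 2). */
	printf("fixed: %u\n", is_last_gpte_fixed(1, 1, 0)); /* 128: stop */
	printf("buggy: %u\n", is_last_gpte_buggy(1, 1, 0)); /* 0: keep going */
	return 0;
}

With the pre-patch ordering, the walk keeps descending past level 1,
which is exactly the memory-corruption hazard described above.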

Fixes: 6bb69c9b69c315200ddc2bc79aee14c0184cf5b2
Cc: stable@vger.kernel.org
Cc: Andy Honig <ahonig@google.com>
Signed-off-by: Ladi Prosek <lprosek@redhat.com>
[Panic if walk_addr_generic gets an incorrect level; this is a serious
 bug and it's not worth a WARN_ON where the recovery path might hide
 further exploitable issues; suggested by Andrew Honig. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 arch/x86/kvm/paging_tmpl.h |  3 ++-
 arch/x86/kvm/mmu.c         | 14 +++++++-------
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index b0454c7e4cff..da06dc8c4fc4 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -334,10 +334,11 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 		--walker->level;
 
 		index = PT_INDEX(addr, walker->level);
-
 		table_gfn = gpte_to_gfn(pte);
 		offset    = index * sizeof(pt_element_t);
 		pte_gpa   = gfn_to_gpa(table_gfn) + offset;
+
+		BUG_ON(walker->level < 1);
 		walker->table_gfn[walker->level - 1] = table_gfn;
 		walker->pte_gpa[walker->level - 1] = pte_gpa;
 
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index ca0112742343..2e4a6732aaa9 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3934,13 +3934,6 @@ static bool sync_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
 static inline bool is_last_gpte(struct kvm_mmu *mmu,
 				unsigned level, unsigned gpte)
 {
-	/*
-	 * PT_PAGE_TABLE_LEVEL always terminates.  The RHS has bit 7 set
-	 * iff level <= PT_PAGE_TABLE_LEVEL, which for our purpose means
-	 * level == PT_PAGE_TABLE_LEVEL; set PT_PAGE_SIZE_MASK in gpte then.
-	 */
-	gpte |= level - PT_PAGE_TABLE_LEVEL - 1;
-
 	/*
 	 * The RHS has bit 7 set iff level < mmu->last_nonleaf_level.
 	 * If it is clear, there are no large pages at this level, so clear
@@ -3948,6 +3941,13 @@ static inline bool is_last_gpte(struct kvm_mmu *mmu,
 	 */
 	gpte &= level - mmu->last_nonleaf_level;
 
+	/*
+	 * PT_PAGE_TABLE_LEVEL always terminates.  The RHS has bit 7 set
+	 * iff level <= PT_PAGE_TABLE_LEVEL, which for our purpose means
+	 * level == PT_PAGE_TABLE_LEVEL; set PT_PAGE_SIZE_MASK in gpte then.
+	 */
+	gpte |= level - PT_PAGE_TABLE_LEVEL - 1;
+
 	return gpte & PT_PAGE_SIZE_MASK;
 }
 
-- 
2.14.1
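
The BUG_ON() added to walk_addr_generic() exists because the walker
arrays are indexed with level - 1. A small sketch of that invariant
(illustrative only; toy_walker and record_step are invented names,
loosely mirroring guest_walker in paging_tmpl.h):

#include <assert.h>

#define PT_MAX_FULL_LEVELS 4	/* 64-bit walker depth in paging_tmpl.h */

struct toy_walker {
	int level;
	unsigned long long table_gfn[PT_MAX_FULL_LEVELS];
	unsigned long long pte_gpa[PT_MAX_FULL_LEVELS];
};

/* If a broken is_last_gpte() lets the walk continue past level 1,
 * level hits 0 and the stores below would target index -1; the
 * patch panics (BUG_ON) rather than risk that corruption. */
static void record_step(struct toy_walker *w,
			unsigned long long table_gfn,
			unsigned long long pte_gpa)
{
	assert(w->level >= 1 && w->level <= PT_MAX_FULL_LEVELS);
	w->table_gfn[w->level - 1] = table_gfn;
	w->pte_gpa[w->level - 1] = pte_gpa;
}

int main(void)
{
	struct toy_walker w = { .level = PT_MAX_FULL_LEVELS };

	/* A well-formed walk stops once level 1 has been recorded. */
	for (; w.level >= 1; --w.level)
		record_step(&w, 0, 0);
	return 0;
}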