0024-x86-xen-64-Fix-the-reported-SS-and-CS-in-SYSCALL.patch

From 2b0794bbebac81a539dfd405273d61a8a16531d2 Mon Sep 17 00:00:00 2001
From: Andy Lutomirski <[email protected]>
Date: Mon, 14 Aug 2017 22:36:19 -0700
Subject: [PATCH 024/242] x86/xen/64: Fix the reported SS and CS in SYSCALL
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

CVE-2017-5754
When I cleaned up the Xen SYSCALL entries, I inadvertently changed
the reported segment registers. Before my patch, regs->ss was
__USER(32)_DS and regs->cs was __USER(32)_CS. After the patch, they
are FLAT_USER_CS/DS(32).

This had a couple of unfortunate effects. It confused the
opportunistic fast return logic. It also significantly increased
the risk of triggering a nasty glibc bug:

  https://sourceware.org/bugzilla/show_bug.cgi?id=21269

Update the Xen entry code to change it back.
Reported-by: Brian Gerst <[email protected]>
Signed-off-by: Andy Lutomirski <[email protected]>
Cc: Andrew Cooper <[email protected]>
Cc: Boris Ostrovsky <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Juergen Gross <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Fixes: 8a9949bc71a7 ("x86/xen/64: Rearrange the SYSCALL entries")
Link: http://lkml.kernel.org/r/daba8351ea2764bb30272296ab9ce08a81bd8264.1502775273.git.luto@kernel.org
Signed-off-by: Ingo Molnar <[email protected]>
(cherry picked from commit fa2016a8e7d846b306e431646d250500e1da0c33)
Signed-off-by: Andy Whitcroft <[email protected]>
Signed-off-by: Kleber Sacilotto de Souza <[email protected]>
(cherry picked from commit 69a6ef3aeb274efe86fd74771830354f303ccc2f)
Signed-off-by: Fabian Grünbichler <[email protected]>
---
 arch/x86/xen/xen-asm_64.S | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/x86/xen/xen-asm_64.S b/arch/x86/xen/xen-asm_64.S
index a8a4f4c460a6..c5fee2680abc 100644
--- a/arch/x86/xen/xen-asm_64.S
+++ b/arch/x86/xen/xen-asm_64.S
@@ -88,6 +88,15 @@ RELOC(xen_sysret64, 1b+1)
 ENTRY(xen_syscall_target)
 	popq %rcx
 	popq %r11
+
+	/*
+	 * Neither Xen nor the kernel really knows what the old SS and
+	 * CS were. The kernel expects __USER_DS and __USER_CS, so
+	 * report those values even though Xen will guess its own values.
+	 */
+	movq $__USER_DS, 4*8(%rsp)
+	movq $__USER_CS, 1*8(%rsp)
+
 	jmp entry_SYSCALL_64_after_hwframe
 ENDPROC(xen_syscall_target)
@@ -97,6 +106,15 @@ ENDPROC(xen_syscall_target)
 ENTRY(xen_syscall32_target)
 	popq %rcx
 	popq %r11
+
+	/*
+	 * Neither Xen nor the kernel really knows what the old SS and
+	 * CS were. The kernel expects __USER32_DS and __USER32_CS, so
+	 * report those values even though Xen will guess its own values.
+	 */
+	movq $__USER32_DS, 4*8(%rsp)
+	movq $__USER32_CS, 1*8(%rsp)
+
 	jmp entry_SYSCALL_compat_after_hwframe
 ENDPROC(xen_syscall32_target)
--
2.14.2
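
Note (illustration only, not part of the upstream patch): after the two popq
instructions, %rsp points at the IRET-style hardware frame that Xen left on the
stack, so 1*8(%rsp) is the saved CS slot and 4*8(%rsp) is the saved SS slot that
the new movq instructions overwrite. The kernel's opportunistic fast-return path
compares those saved selectors against __USER_CS/__USER_DS before it will use
SYSRET, which is why the flat Xen selectors confused it. A minimal C sketch of
the frame layout being patched, assuming the standard x86-64 interrupt-frame
ordering (the struct name is hypothetical, used here only to show the offsets):

    /*
     * Illustrative only -- not kernel code.  Models the five-word
     * hardware frame (RIP, CS, RFLAGS, RSP, SS) that sits at %rsp
     * once %rcx and %r11 have been popped in xen_syscall_target.
     */
    struct xen_syscall_hw_frame {
        unsigned long rip;     /* 0*8(%rsp) */
        unsigned long cs;      /* 1*8(%rsp) -- patch forces __USER_CS / __USER32_CS */
        unsigned long rflags;  /* 2*8(%rsp) */
        unsigned long rsp;     /* 3*8(%rsp) */
        unsigned long ss;      /* 4*8(%rsp) -- patch forces __USER_DS / __USER32_DS */
    };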