Columns:
commit_msg: string (lengths 1-24.2k)
commit_hash: string (lengths 2-84)
project: string (lengths 2-40)
source: string (4 classes)
labels: int64 (values 0-1)
repo_url: string (lengths 26-70)
commit_url: string (lengths 74-118)
commit_date: string (length 25)
NFC: Prevent multiple buffer overflows in NCI Fix multiple remotely-exploitable stack-based buffer overflows due to the NCI code pulling length fields directly from incoming frames and copying too much data into statically-sized arrays. Signed-off-by: Dan Rosenberg <dan.j.rosenberg@gmail.com> Cc: stable@kernel.org Cc: security@kernel.org Cc: Lauro Ramos Venancio <lauro.venancio@openbossa.org> Cc: Aloisio Almeida Jr <aloisio.almeida@openbossa.org> Cc: Samuel Ortiz <sameo@linux.intel.com> Cc: David S. Miller <davem@davemloft.net> Acked-by: Ilan Elias <ilane@ti.com> Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
67de956ff5dc1d4f321e16cfbd63f5be3b691b43
linux
bigvul
1
null
null
null
macvtap: zerocopy: validate vectors before building skb There are several reasons that the vectors need to be validated: - Return error when caller provides vectors whose num is greater than UIO_MAXIOV. - Linearize part of skb when userspace provides vectors greater than MAX_SKB_FRAGS. - Return error when userspace provides vectors whose total length may exceed MAX_SKB_FRAGS * PAGE_SIZE. Signed-off-by: Jason Wang <jasowang@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
b92946e2919134ebe2a4083e4302236295ea2a73
linux
bigvul
1
null
null
null
PKINIT (draft9) null ptr deref [CVE-2012-1016] Don't check for an agility KDF identifier in the non-draft9 reply structure when we're building a draft9 reply, because it'll be NULL. The KDC plugin for PKINIT can dereference a null pointer when handling a draft9 request, leading to a crash of the KDC process. An attacker would need to have a valid PKINIT certificate, or an unauthenticated attacker could execute the attack if anonymous PKINIT is enabled. CVSSv2 vector: AV:N/AC:M/Au:N/C:N/I:N/A:P/E:P/RL:O/RC:C [tlyu@mit.edu: reformat comment and edit log message] (back ported from commit cd5ff932c9d1439c961b0cf9ccff979356686aff) ticket: 7527 (new) version_fixed: 1.10.4 status: resolved
db64ca25d661a47b996b4e2645998b5d7f0eb52c
krb5
bigvul
1
null
null
null
batman-adv: Only write requested number of bytes to user buffer Don't write more than the requested number of bytes of a batman-adv icmp packet to the userspace buffer. Otherwise unrelated userspace memory might get overridden by the kernel. Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
b5a1eeef04cc7859f34dec9b72ea1b28e4aba07c
linux
bigvul
1
null
null
null
sctp: Fix another socket race during accept/peeloff There is a race between sctp_rcv() and sctp_accept() where we have moved the association from the listening socket to the accepted socket, but sctp_rcv() processing cached the old socket and continues to use it. The easy solution is to check for the socket mismatch once we've grabbed the socket lock. If we hit a mismatch, that means that we are currently holding the lock on the listening socket, but the association is referencing a newly accepted socket. We need to drop the lock on the old socket and grab the lock on the new one. A more proper solution might be to create accepted sockets when the new association is established, similar to TCP. That would eliminate the race for 1-to-1 style sockets, but it would still exist for 1-to-many sockets where a user wished to peeloff an association. For now, we'll live with this easy solution as it addresses the problem. Reported-by: Michal Hocko <mhocko@suse.cz> Reported-by: Karsten Keil <kkeil@suse.de> Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
ae53b5bd77719fed58086c5be60ce4f22bffe1c6
linux
bigvul
1
null
null
null
KVM: Device assignment permission checks (cherry picked from commit 3d27e23b17010c668db311140b17bbbb70c78fb9) Only allow KVM device assignment to attach to devices which: - Are not bridges - Have BAR resources (assume others are special devices) - The user has permissions to use Assigning a bridge is a configuration error, it's not supported, and typically doesn't result in the behavior the user is expecting anyway. Devices without BAR resources are typically chipset components that also don't have host drivers. We don't want users to hold such devices captive or cause system problems by fencing them off into an iommu domain. We determine "permission to use" by testing whether the user has access to the PCI sysfs resource files. By default a normal user will not have access to these files, so it provides a good indication that an administration agent has granted the user access to the device. [Yang Bai: add missing #include] [avi: fix comment style] Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Yang Bai <hamo.by@gmail.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
c4e7f9022e506c6635a5037713c37118e23193e4
linux
bigvul
1
null
null
null
GFS2: rewrite fallocate code to write blocks directly GFS2's fallocate code currently goes through the page cache. Since it's only writing to the end of the file or to holes in it, it doesn't need to, and it was causing issues on low memory environments. This patch pulls in some of Steve's block allocation work, and uses it to simply allocate the blocks for the file, and zero them out at allocation time. It provides a slight performance increase, and it dramatically simplifies the code. Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com> Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
64dd153c83743af81f20924c6343652d731eeecb
linux
bigvul
1
null
null
null
bridge: reset IPCB in br_parse_ip_options Commit 462fb2af9788a82 (bridge : Sanitize skb before it enters the IP stack), missed one IPCB init before calling ip_options_compile() Thanks to Scot Doyle for his tests and bug reports. Reported-by: Scot Doyle <lkml@scotdoyle.com> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Cc: Hiroaki SHIMODA <shimoda.hiroaki@gmail.com> Acked-by: Bandan Das <bandan.das@stratus.com> Acked-by: Stephen Hemminger <shemminger@vyatta.com> Cc: Jan Lübbe <jluebbe@debian.org> Signed-off-by: David S. Miller <davem@davemloft.net>
f8e9881c2aef1e982e5abc25c046820cd0b7cf64
linux
bigvul
1
null
null
null
ext4: reimplement convert and split_unwritten Reimplement ext4_ext_convert_to_initialized() and ext4_split_unwritten_extents() using ext4_split_extent() Signed-off-by: Yongqiang Yang <xiaoqiangnk@gmail.com> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Tested-by: Allison Henderson <achender@linux.vnet.ibm.com>
667eff35a1f56fa74ce98a0c7c29a40adc1ba4e3
linux
bigvul
1
null
null
null
AppArmor: fix oops in apparmor_setprocattr When invalid parameters are passed to apparmor_setprocattr a NULL deref oops occurs when it tries to record an audit message. This is because it is passing NULL for the profile parameter for aa_audit. But aa_audit now requires that the profile passed is not NULL. Fix this by passing the current profile on the task that is trying to setprocattr. Signed-off-by: Kees Cook <kees@ubuntu.com> Signed-off-by: John Johansen <john.johansen@canonical.com> Cc: stable@kernel.org Signed-off-by: James Morris <jmorris@namei.org>
a5b2c5b2ad5853591a6cac6134cd0f599a720865
linux
bigvul
1
null
null
null
perf tools: do not look at ./config for configuration In addition to /etc/perfconfig and $HOME/.perfconfig, perf looks for configuration in the file ./config, imitating git which looks at $GIT_DIR/config. If ./config is not a perf configuration file, it fails, or worse, treats it as a configuration file and changes behavior in some unexpected way. "config" is not an unusual name for a file to be lying around and perf does not have a private directory dedicated for its own use, so let's just stop looking for configuration in the cwd. Callers needing context-sensitive configuration can use the PERF_CONFIG environment variable. Requested-by: Christian Ohm <chr.ohm@gmx.net> Cc: 632923@bugs.debian.org Cc: Ben Hutchings <ben@decadent.org.uk> Cc: Christian Ohm <chr.ohm@gmx.net> Cc: Ingo Molnar <mingo@elte.hu> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20110805165838.GA7237@elie.gateway.2wire.net Signed-off-by: Jonathan Nieder <jrnieder@gmail.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
aba8d056078e47350d85b06a9cabd5afcc4b72ea
linux
bigvul
1
null
null
null
NLM: Don't hang forever on NLM unlock requests If the NLM daemon is killed on the NFS server, we can currently end up hanging forever on an 'unlock' request, instead of aborting. Basically, if the rpcbind request fails, or the server keeps returning garbage, we really want to quit instead of retrying. Tested-by: Vasily Averin <vvs@sw.ru> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Cc: stable@kernel.org
0b760113a3a155269a3fba93a409c640031dd68f
linux
bigvul
1
null
null
null
[SCTP]: Fix assertion (!atomic_read(&sk->sk_rmem_alloc)) failed message In the current implementation, LKSCTP does receive buffer accounting for data in sctp_receive_queue and pd_lobby. However, LKSCTP doesn't do accounting for data in frag_list when data is fragmented. In addition, LKSCTP doesn't do accounting for data in the reasm and lobby queues in structure sctp_ulpq. When there is data in these queues, an assertion failed message is printed in inet_sock_destruct because sk_rmem_alloc of oldsk does not become 0 when the socket is destroyed. Signed-off-by: Tsutomu Fujii <t-fujii@nb.jp.nec.com> Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
ea2bc483ff5caada7c4aa0d5fbf87d3a6590273d
linux
bigvul
1
null
null
null
mm: thp: fix /dev/zero MAP_PRIVATE and vm_flags cleanups The huge_memory.c THP page fault was allowed to run if vm_ops was null (which would succeed for /dev/zero MAP_PRIVATE, as the f_op->mmap wouldn't setup a special vma->vm_ops and it would fallback to regular anonymous memory) but other THP logics weren't fully activated for vmas with vm_file not NULL (/dev/zero has a not NULL vma->vm_file). So this removes the vm_file checks so that /dev/zero also can safely use THP (the other albeit safer approach to fix this bug would have been to prevent the THP initial page fault to run if vm_file was set). After removing the vm_file checks, this also makes huge_memory.c stricter in khugepaged for the DEBUG_VM=y case. It doesn't replace the vm_file check with a is_pfn_mapping check (but it keeps checking for VM_PFNMAP under VM_BUG_ON) because for a is_cow_mapping() mapping VM_PFNMAP should only be allowed to exist before the first page fault, and in turn when vma->anon_vma is null (so preventing khugepaged registration). So I tend to think the previous comment saying if vm_file was set, VM_PFNMAP might have been set and we could still be registered in khugepaged (despite anon_vma was not NULL to be registered in khugepaged) was too paranoid. The is_linear_pfn_mapping check is also I think superfluous (as described by comment) but under DEBUG_VM it is safe to stay. Addresses https://bugzilla.kernel.org/show_bug.cgi?id=33682 Signed-off-by: Andrea Arcangeli <aarcange@redhat.com> Reported-by: Caspar Zhang <bugs@casparzhang.com> Acked-by: Mel Gorman <mel@csn.ul.ie> Acked-by: Rik van Riel <riel@redhat.com> Cc: <stable@kernel.org> [2.6.38.x] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
78f11a255749d09025f54d4e2df4fbcb031530e2
linux
bigvul
1
null
null
null
cifs: clean up cifs_find_smb_ses (try #2) This patch replaces the earlier patch by the same name. The only difference is that MAX_PASSWORD_SIZE has been increased to attempt to match the limits that windows enforces. Do a better job of matching sessions by authtype. Matching by username for a Kerberos session is incorrect, and anonymous sessions need special handling. Also, in the case where we do match by username, we also need to match by password. That ensures that someone else doesn't "borrow" an existing session without needing to know the password. Finally, passwords can be longer than 16 bytes. Bump MAX_PASSWORD_SIZE to 512 to match the size that the userspace mount helper allows. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
4ff67b720c02c36e54d55b88c2931879b7db1cd2
linux
bigvul
1
null
null
null
Prevent rt_sigqueueinfo and rt_tgsigqueueinfo from spoofing the signal code Userland should be able to trust the pid and uid of the sender of a signal if the si_code is SI_TKILL. Unfortunately, the kernel has historically allowed sigqueueinfo() to send any si_code at all (as long as it was negative - to distinguish it from kernel-generated signals like SIGILL etc), so it could spoof a SI_TKILL with incorrect siginfo values. Happily, it looks like glibc has always set si_code to the appropriate SI_QUEUE, so there is probably no actual user code that ever uses anything but the appropriate SI_QUEUE flag. So just tighten the check for si_code (we used to allow any negative value), and add a (one-time) warning in case there are binaries out there that might depend on using other si_code values. Signed-off-by: Julien Tinnes <jln@google.com> Acked-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
da48524eb20662618854bb3df2db01fc65f3070c
linux
bigvul
1
null
null
null
irda: validate peer name and attribute lengths Length fields provided by a peer for names and attributes may be longer than the destination array sizes. Validate lengths to prevent stack buffer overflows. Signed-off-by: Dan Rosenberg <drosenberg@vsecurity.com> Cc: stable@kernel.org Signed-off-by: David S. Miller <davem@davemloft.net>
d370af0ef7951188daeb15bae75db7ba57c67846
linux
bigvul
1
null
null
null
net: don't allow CAP_NET_ADMIN to load non-netdev kernel modules Since a8f80e8ff94ecba629542d9b4b5f5a8ee3eb565c any process with CAP_NET_ADMIN may load any module from /lib/modules/. This doesn't mean that CAP_NET_ADMIN is a superset of CAP_SYS_MODULE as modules are limited to /lib/modules/**. However, CAP_NET_ADMIN capability shouldn't allow anybody load any module not related to networking. This patch restricts an ability of autoloading modules to netdev modules with explicit aliases. This fixes CVE-2011-1019. Arnd Bergmann suggested to leave untouched the old pre-v2.6.32 behavior of loading netdev modules by name (without any prefix) for processes with CAP_SYS_MODULE to maintain the compatibility with network scripts that use autoloading netdev modules by aliases like "eth0", "wlan0". Currently there are only three users of the feature in the upstream kernel: ipip, ip_gre and sit. root@albatros:~# capsh --drop=$(seq -s, 0 11),$(seq -s, 13 34) -- root@albatros:~# grep Cap /proc/$$/status CapInh: 0000000000000000 CapPrm: fffffff800001000 CapEff: fffffff800001000 CapBnd: fffffff800001000 root@albatros:~# modprobe xfs FATAL: Error inserting xfs (/lib/modules/2.6.38-rc6-00001-g2bf4ca3/kernel/fs/xfs/xfs.ko): Operation not permitted root@albatros:~# lsmod | grep xfs root@albatros:~# ifconfig xfs xfs: error fetching interface information: Device not found root@albatros:~# lsmod | grep xfs root@albatros:~# lsmod | grep sit root@albatros:~# ifconfig sit sit: error fetching interface information: Device not found root@albatros:~# lsmod | grep sit root@albatros:~# ifconfig sit0 sit0 Link encap:IPv6-in-IPv4 NOARP MTU:1480 Metric:1 root@albatros:~# lsmod | grep sit sit 10457 0 tunnel4 2957 1 sit For CAP_SYS_MODULE module loading is still relaxed: root@albatros:~# grep Cap /proc/$$/status CapInh: 0000000000000000 CapPrm: ffffffffffffffff CapEff: ffffffffffffffff CapBnd: ffffffffffffffff root@albatros:~# ifconfig xfs xfs: error fetching interface information: Device not found root@albatros:~# lsmod | grep xfs xfs 745319 0 Reference: https://lkml.org/lkml/2011/2/24/203 Signed-off-by: Vasiliy Kulikov <segoon@openwall.com> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru> Acked-by: David S. Miller <davem@davemloft.net> Acked-by: Kees Cook <kees.cook@canonical.com> Signed-off-by: James Morris <jmorris@namei.org>
8909c9ad8ff03611c9c96c9a92656213e4bb495b
linux
bigvul
1
null
null
null
Fix kpasswd UDP ping-pong [CVE-2002-2443] The kpasswd service provided by kadmind was vulnerable to a UDP "ping-pong" attack [CVE-2002-2443]. Don't respond to packets unless they pass some basic validation, and don't respond to our own error packets. Some authors use CVE-1999-0103 to refer to the kpasswd UDP ping-pong attack or UDP ping-pong attacks in general, but there is discussion leading toward narrowing the definition of CVE-1999-0103 to the echo, chargen, or other similar built-in inetd services. Thanks to Vincent Danen for alerting us to this issue. CVSSv2: AV:N/AC:L/Au:N/C:N/I:N/A:P/E:P/RL:O/RC:C ticket: 7637 (new) target_version: 1.11.3 tags: pullup
cf1a0c411b2668c57c41e9c4efd15ba17b6b322c
krb5
bigvul
1
null
null
null
isofs: Fix infinite looping over CE entries Rock Ridge extensions define so-called Continuation Entries (CE) which define where further space with Rock Ridge data is located. A corrupted isofs image can contain an arbitrarily long chain of these, including one containing a loop, and thus cause the kernel to end up in an infinite loop when traversing these entries. Limit the traversal to 32 entries which should be more than enough space to store all the Rock Ridge data. Reported-by: P J P <ppandit@redhat.com> CC: stable@vger.kernel.org Signed-off-by: Jan Kara <jack@suse.cz>
f54e18f1b831c92f6512d2eedb224cd63d607d3d
linux
bigvul
1
null
null
null
x86_64, switch_to(): Load TLS descriptors before switching DS and ES Otherwise, if buggy user code points DS or ES into the TLS array, they would be corrupted after a context switch. This also significantly improves the comments and documents some gotchas in the code. Before this patch, the both tests below failed. With this patch, the es test passes, although the gsbase test still fails. ----- begin es test ----- /* * Copyright (c) 2014 Andy Lutomirski * GPL v2 */ static unsigned short GDT3(int idx) { return (idx << 3) | 3; } static int create_tls(int idx, unsigned int base) { struct user_desc desc = { .entry_number = idx, .base_addr = base, .limit = 0xfffff, .seg_32bit = 1, .contents = 0, /* Data, grow-up */ .read_exec_only = 0, .limit_in_pages = 1, .seg_not_present = 0, .useable = 0, }; if (syscall(SYS_set_thread_area, &desc) != 0) err(1, "set_thread_area"); return desc.entry_number; } int main() { int idx = create_tls(-1, 0); printf("Allocated GDT index %d\n", idx); unsigned short orig_es; asm volatile ("mov %%es,%0" : "=rm" (orig_es)); int errors = 0; int total = 1000; for (int i = 0; i < total; i++) { asm volatile ("mov %0,%%es" : : "rm" (GDT3(idx))); usleep(100); unsigned short es; asm volatile ("mov %%es,%0" : "=rm" (es)); asm volatile ("mov %0,%%es" : : "rm" (orig_es)); if (es != GDT3(idx)) { if (errors == 0) printf("[FAIL]\tES changed from 0x%hx to 0x%hx\n", GDT3(idx), es); errors++; } } if (errors) { printf("[FAIL]\tES was corrupted %d/%d times\n", errors, total); return 1; } else { printf("[OK]\tES was preserved\n"); return 0; } } ----- end es test ----- ----- begin gsbase test ----- /* * gsbase.c, a gsbase test * Copyright (c) 2014 Andy Lutomirski * GPL v2 */ static unsigned char *testptr, *testptr2; static unsigned char read_gs_testvals(void) { unsigned char ret; asm volatile ("movb %%gs:%1, %0" : "=r" (ret) : "m" (*testptr)); return ret; } int main() { int errors = 0; testptr = mmap((void *)0x200000000UL, 1, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_FIXED | MAP_ANONYMOUS, -1, 0); if (testptr == MAP_FAILED) err(1, "mmap"); testptr2 = mmap((void *)0x300000000UL, 1, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_FIXED | MAP_ANONYMOUS, -1, 0); if (testptr2 == MAP_FAILED) err(1, "mmap"); *testptr = 0; *testptr2 = 1; if (syscall(SYS_arch_prctl, ARCH_SET_GS, (unsigned long)testptr2 - (unsigned long)testptr) != 0) err(1, "ARCH_SET_GS"); usleep(100); if (read_gs_testvals() == 1) { printf("[OK]\tARCH_SET_GS worked\n"); } else { printf("[FAIL]\tARCH_SET_GS failed\n"); errors++; } asm volatile ("mov %0,%%gs" : : "r" (0)); if (read_gs_testvals() == 0) { printf("[OK]\tWriting 0 to gs worked\n"); } else { printf("[FAIL]\tWriting 0 to gs failed\n"); errors++; } usleep(100); if (read_gs_testvals() == 0) { printf("[OK]\tgsbase is still zero\n"); } else { printf("[FAIL]\tgsbase was corrupted\n"); errors++; } return errors == 0 ? 0 : 1; } ----- end gsbase test ----- Signed-off-by: Andy Lutomirski <luto@amacapital.net> Cc: <stable@vger.kernel.org> Cc: Andi Kleen <andi@firstfloor.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: http://lkml.kernel.org/r/509d27c9fec78217691c3dad91cec87e1006b34a.1418075657.git.luto@amacapital.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
f647d7c155f069c1a068030255c300663516420e
linux
bigvul
1
null
null
null
fixed a server crash
a766cb44bcffcdb0b88e776d01c5ee1323d44f85
teeworlds
bigvul
1
null
null
null
x86_64, traps: Stop using IST for #SS On a 32-bit kernel, this has no effect, since there are no IST stacks. On a 64-bit kernel, #SS can only happen in user code, on a failed iret to user space, a canonical violation on access via RSP or RBP, or a genuine stack segment violation in 32-bit kernel code. The first two cases don't need IST, and the latter two cases are unlikely fatal bugs, and promoting them to double faults would be fine. This fixes a bug in which the espfix64 code mishandles a stack segment violation. This saves 4k of memory per CPU and a tiny bit of code. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
6f442be2fb22be02cafa606f1769fa1e6f894441
linux
bigvul
1
null
null
null
Fix format string vulnerability in using agerr() to report errors during parsing. We now use a fixed format %s, and pass the error string as an argument.
99eda421f7ddc27b14e4ac1d2126e5fe41719081
graphviz
bigvul
1
null
null
null
Do bounds checking when unescaping PPP. Clean up a const issue while we're at it.
0f95d441e4b5d7512cc5c326c8668a120e048eda
tcpdump
bigvul
1
null
null
null
Merge fix from security/bb11155 branch
fc3794a54d2affe5770c1f876484a871c783e91e
clamav-devel
bigvul
1
null
null
null
[media] ttusb-dec: buffer overflow in ioctl We need to add a limit check here so we don't overflow the buffer. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
f2e323ec96077642d397bb1c355def536d489d16
linux
bigvul
1
null
null
null
mac80211: fix fragmentation code, particularly for encryption The "new" fragmentation code (since my rewrite almost 5 years ago) erroneously sets skb->len rather than using skb_trim() to adjust the length of the first fragment after copying out all the others. This leaves the skb tail pointer pointing to after where the data originally ended, and thus causes the encryption MIC to be written at that point, rather than where it belongs: immediately after the data. The impact of this is that if software encryption is done, then a) encryption doesn't work for the first fragment, the connection becomes unusable as the first fragment will never be properly verified at the receiver, the MIC is practically guaranteed to be wrong b) we leak up to 8 bytes of plaintext (!) of the packet out into the air This is only mitigated by the fact that many devices are capable of doing encryption in hardware, in which case this can't happen as the tail pointer is irrelevant in that case. Additionally, fragmentation is not used very frequently and would normally have to be configured manually. Fix this by using skb_trim() properly. Cc: stable@vger.kernel.org Fixes: 2de8e0d999b8 ("mac80211: rewrite fragmentation") Reported-by: Jouni Malinen <j@w1.fi> Signed-off-by: Johannes Berg <johannes.berg@intel.com>
338f977f4eb441e69bb9a46eaa0ac715c931a67f
linux
bigvul
1
null
null
null
Add NEWS-file for 0.9.0.
0f5b4fd860fa7e3a6c47201637aab05395f32647
mod_auth_mellon
bigvul
1
null
null
null
update version of lazy_bdecode from libtorrent
bbc0b7191e3f48461ca6e5b1b34bdf4b3f1e79a9
bootstrap-dht
bigvul
1
null
null
null
Check for invalid input in encrypted buffers The ECB Blowfish decryption function assumed that encrypted input would always come in blocks of 12 characters, as specified. However, buggy clients or annoying people may not adhere to that assumption, causing the core to crash while trying to process the invalid base64 input. With this commit we make sure that we're not overstepping the bounds of the input string while decoding it; instead we bail out early and display the original input. Fixes #1314. Thanks to Tucos for finding that one!
8b5ecd226f9208af3074b33d3b7cf5e14f55b138
quassel
bigvul
1
null
null
null
KVM: emulate: avoid accessing NULL ctxt->memopp A failure to decode the instruction can cause a NULL pointer access. This is fixed simply by moving the "done" label as close as possible to the return. This fixes CVE-2014-8481. Reported-by: Andy Lutomirski <luto@amacapital.net> Cc: stable@vger.kernel.org Fixes: 41061cdb98a0bec464278b4db8e894a3121671f5 Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
a430c9166312e1aa3d80bce32374233bdbfeba32
linux
bigvul
1
null
null
null
KVM: x86: PREFETCH and HINT_NOP should have SrcMem flag The decode phase of the x86 emulator assumes that every instruction with the ModRM flag, and which can be used with RIP-relative addressing, has either SrcMem or DstMem. This is not the case for several instructions - prefetch, hint-nop and clflush. Adding SrcMem|NoAccess for prefetch and hint-nop and SrcMem for clflush. This fixes CVE-2014-8480. Fixes: 41061cdb98a0bec464278b4db8e894a3121671f5 Cc: stable@vger.kernel.org Signed-off-by: Nadav Amit <namit@cs.technion.ac.il> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3f6f1480d86bf9fc16c160d803ab1d006e3058d5
linux
bigvul
1
null
null
null
security policy: remove clause for Abandon call I committed this after copying it out of a work-in-progress branch that was being worked on by Serge. I didn't realise that it was most likely only ever intended to be for debugging purposes. According to Martin and Stéphane this is not needed and is a potential security problem. Let's close that hole.
d2e91c118f6128875274a638007702d1cc665893
systemd-shim
bigvul
1
null
null
null
kvm: fix excessive pages un-pinning in kvm_iommu_map error path. The third parameter of kvm_unpin_pages() when called from kvm_iommu_map_pages() is wrong; it should be the number of pages to un-pin and not the page size. This error was facilitated with an inconsistent API: kvm_pin_pages() takes a size, but kvm_unpin_pages() takes a number of pages, so fix the problem by matching the two. This was introduced by commit 350b8bd ("kvm: iommu: fix the third parameter of kvm_iommu_put_pages (CVE-2014-3601)"), which fixes the lack of un-pinning for pages intended to be un-pinned (i.e. memory leak) but unfortunately potentially aggravated the number of pages we un-pin that should have stayed pinned. As far as I understand though, the same practical mitigations apply. This issue was found during review of Red Hat 6.6 patches to prepare Ksplice rebootless updates. Thanks to Vegard for his time on a late Friday evening to help me in understanding this code. Fixes: 350b8bd ("kvm: iommu: fix the third parameter of... (CVE-2014-3601)") Cc: stable@vger.kernel.org Signed-off-by: Quentin Casasnovas <quentin.casasnovas@oracle.com> Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com> Signed-off-by: Jamie Iles <jamie.iles@oracle.com> Reviewed-by: Sasha Levin <sasha.levin@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3d32e4dbe71374a6780eaf51d719d76f9a9bf22f
linux
bigvul
1
null
null
null
x86/tls: Validate TLS entries to protect espfix Installing a 16-bit RW data segment into the GDT defeats espfix. AFAICT this will not affect glibc, Wine, or dosemu at all. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Acked-by: H. Peter Anvin <hpa@zytor.com> Cc: stable@vger.kernel.org Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: security@kernel.org <security@kernel.org> Cc: Willy Tarreau <w@1wt.eu> Signed-off-by: Ingo Molnar <mingo@kernel.org>
41bdc78544b8a93a9c6814b8bbbfef966272abbe
linux
bigvul
1
null
null
null
- reduce recursion level from 20 to 10 and make a symbolic constant for it. - pull out the guts of saving and restoring the output buffer into functions and take care not to overwrite the error message if an error happened.
6f737ddfadb596d7d4a993f7ed2141ffd664a81c
file
bigvul
1
null
null
null
Stop reporting bad capabilities after the first few.
d7cdad007c507e6c79f51f058dd77fab70ceb9f6
file
bigvul
1
null
null
null
Merge r1642499 from trunk: *) SECURITY: CVE-2014-8109 (cve.mitre.org) mod_lua: Fix handling of the Require line when a LuaAuthzProvider is used in multiple Require directives with different arguments. PR57204 [Edward Lu <Chaosed0 gmail.com>] Submitted By: Edward Lu Committed By: covener Submitted by: covener Reviewed/backported by: jim git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/branches/2.4.x@1642861 13f79535-47bb-0310-9956-ffa450edef68
3f1693d558d0758f829c8b53993f1749ddf6ffcb
httpd
bigvul
1
null
null
null
arm64: __clear_user: handle exceptions on strb ARM64 currently doesn't fix up faults on the single-byte (strb) case of __clear_user... which means that we can cause a nasty kernel panic as an ordinary user with any multiple PAGE_SIZE+1 read from /dev/zero. i.e.: dd if=/dev/zero of=foo ibs=1 count=1 (or ibs=65537, etc.) This is a pretty obscure bug in the general case since we'll only __do_kernel_fault (since there's no extable entry for pc) if the mmap_sem is contended. However, with CONFIG_DEBUG_VM enabled, we'll always fault. if (!down_read_trylock(&mm->mmap_sem)) { if (!user_mode(regs) && !search_exception_tables(regs->pc)) goto no_context; retry: down_read(&mm->mmap_sem); } else { /* * The above down_read_trylock() might have succeeded in * which * case, we'll have missed the might_sleep() from * down_read(). */ might_sleep(); if (!user_mode(regs) && !search_exception_tables(regs->pc)) goto no_context; } Fix that by adding an extable entry for the strb instruction, since it touches user memory, similar to the other stores in __clear_user. Signed-off-by: Kyle McMartin <kyle@redhat.com> Reported-by: Miloš Prchlík <mprchlik@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
97fc15436b36ee3956efad83e22a557991f7d19d
linux
bigvul
1
null
null
null
KVM: x86: Don't report guest userspace emulation error to userspace Commit fc3a9157d314 ("KVM: X86: Don't report L2 emulation failures to user-space") disabled the reporting of L2 (nested guest) emulation failures to userspace due to race-condition between a vmexit and the instruction emulator. The same rational applies also to userspace applications that are permitted by the guest OS to access MMIO area or perform PIO. This patch extends the current behavior - of injecting a #UD instead of reporting it to userspace - also for guest userspace code. Signed-off-by: Nadav Amit <namit@cs.technion.ac.il> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
a2b9e6c1a35afcc0973acb72e591c714e78885ff
linux
bigvul
1
null
null
null
net: sctp: fix NULL pointer dereference in af->from_addr_param on malformed packet An SCTP server doing ASCONF will panic on malformed INIT ping-of-death in the form of: ------------ INIT[PARAM: SET_PRIMARY_IP] ------------> While the INIT chunk parameter verification dissects through many things in order to detect malformed input, it misses to actually check parameters inside of parameters. E.g. RFC5061, section 4.2.4 proposes a 'set primary IP address' parameter in ASCONF, which has as a subparameter an address parameter. So an attacker may send a parameter type other than SCTP_PARAM_IPV4_ADDRESS or SCTP_PARAM_IPV6_ADDRESS, param_type2af() will subsequently return 0 and thus sctp_get_af_specific() returns NULL, too, which we then happily dereference unconditionally through af->from_addr_param(). The trace for the log: BUG: unable to handle kernel NULL pointer dereference at 0000000000000078 IP: [<ffffffffa01e9c62>] sctp_process_init+0x492/0x990 [sctp] PGD 0 Oops: 0000 [#1] SMP [...] Pid: 0, comm: swapper Not tainted 2.6.32-504.el6.x86_64 #1 Bochs Bochs RIP: 0010:[<ffffffffa01e9c62>] [<ffffffffa01e9c62>] sctp_process_init+0x492/0x990 [sctp] [...] Call Trace: <IRQ> [<ffffffffa01f2add>] ? sctp_bind_addr_copy+0x5d/0xe0 [sctp] [<ffffffffa01e1fcb>] sctp_sf_do_5_1B_init+0x21b/0x340 [sctp] [<ffffffffa01e3751>] sctp_do_sm+0x71/0x1210 [sctp] [<ffffffffa01e5c09>] ? sctp_endpoint_lookup_assoc+0xc9/0xf0 [sctp] [<ffffffffa01e61f6>] sctp_endpoint_bh_rcv+0x116/0x230 [sctp] [<ffffffffa01ee986>] sctp_inq_push+0x56/0x80 [sctp] [<ffffffffa01fcc42>] sctp_rcv+0x982/0xa10 [sctp] [<ffffffffa01d5123>] ? ipt_local_in_hook+0x23/0x28 [iptable_filter] [<ffffffff8148bdc9>] ? nf_iterate+0x69/0xb0 [<ffffffff81496d10>] ? ip_local_deliver_finish+0x0/0x2d0 [<ffffffff8148bf86>] ? nf_hook_slow+0x76/0x120 [<ffffffff81496d10>] ? ip_local_deliver_finish+0x0/0x2d0 [...] A minimal way to address this is to check for NULL as we do on all other such occasions where we know sctp_get_af_specific() could possibly return with NULL. Fixes: d6de3097592b ("[SCTP]: Add the handling of "Set Primary IP Address" parameter to INIT") Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Cc: Vlad Yasevich <vyasevich@gmail.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
e40607cbe270a9e8360907cb1e62ddf0736e4864
linux
bigvul
1
null
null
null
tracing/syscalls: Ignore numbers outside NR_syscalls' range ARM has some private syscalls (for example, set_tls(2)) which lie outside the range of NR_syscalls. If any of these are called while syscall tracing is being performed, out-of-bounds array access will occur in the ftrace and perf sys_{enter,exit} handlers. # trace-cmd record -e raw_syscalls:* true && trace-cmd report ... true-653 [000] 384.675777: sys_enter: NR 192 (0, 1000, 3, 4000022, ffffffff, 0) true-653 [000] 384.675812: sys_exit: NR 192 = 1995915264 true-653 [000] 384.675971: sys_enter: NR 983045 (76f74480, 76f74000, 76f74b28, 76f74480, 76f76f74, 1) true-653 [000] 384.675988: sys_exit: NR 983045 = 0 ... # trace-cmd record -e syscalls:* true [ 17.289329] Unable to handle kernel paging request at virtual address aaaaaace [ 17.289590] pgd = 9e71c000 [ 17.289696] [aaaaaace] *pgd=00000000 [ 17.289985] Internal error: Oops: 5 [#1] PREEMPT SMP ARM [ 17.290169] Modules linked in: [ 17.290391] CPU: 0 PID: 704 Comm: true Not tainted 3.18.0-rc2+ #21 [ 17.290585] task: 9f4dab00 ti: 9e710000 task.ti: 9e710000 [ 17.290747] PC is at ftrace_syscall_enter+0x48/0x1f8 [ 17.290866] LR is at syscall_trace_enter+0x124/0x184 Fix this by ignoring out-of-NR_syscalls-bounds syscall numbers. Commit cd0980fc8add "tracing: Check invalid syscall nr while tracing syscalls" added the check for less than zero, but it should have also checked for greater than NR_syscalls. Link: http://lkml.kernel.org/p/1414620418-29472-1-git-send-email-rabin@rab.in Fixes: cd0980fc8add "tracing: Check invalid syscall nr while tracing syscalls" Cc: stable@vger.kernel.org # 2.6.33+ Signed-off-by: Rabin Vincent <rabin@rab.in> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
086ba77a6db00ed858ff07451bedee197df868c9
linux
bigvul
1
null
null
null
net: avoid dependency of net_get_random_once on nop patching net_get_random_once depends on the static keys infrastructure to patch up the branch to the slow path during boot. This was realized by abusing the static keys api and defining a new initializer to not enable the call site while still indicating that the branch point should get patched up. This was needed to have the fast path considered likely by gcc. The static key initialization during boot up normally walks through all the registered keys and either patches in ideal nops or enables the jump site but omitted that step on x86 if ideal nops were already placed at static_key branch points. Thus net_get_random_once branches did not always become active. This patch switches net_get_random_once to the ordinary static_key api and thus places the kernel fast path in the path that gcc considers unlikely. Microbenchmarks on Intel and AMD x86-64 showed that the unlikely path actually beats the likely path in terms of cycle cost and that different nop patterns did not make much difference, thus this switch should not be noticeable. Fixes: a48e42920ff38b ("net: introduce new macro net_get_random_once") Reported-by: Tuomas Räsänen <tuomasjjrasanen@tjjr.fi> Cc: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
3d4405226d27b3a215e4d03cfa51f536244e5de7
linux
bigvul
1
null
null
null
xfs: fix directory hash ordering bug Commit f5ea1100 ("xfs: add CRCs to dir2/da node blocks") introduced in 3.10 incorrectly converted the btree hash index array pointer in xfs_da3_fixhashpath(). It resulted in the current hash always being compared against the first entry in the btree rather than the current block index into the btree block's hash entry array. As a result, it was comparing the wrong hashes, and so could misorder the entries in the btree. For most cases, this doesn't cause any problems as it requires hash collisions to expose the ordering problem. However, when there are hash collisions within a directory there is a very good probability that the entries will be ordered incorrectly and that actually matters when duplicate hashes are placed into or removed from the btree block hash entry array. This bug results in an on-disk directory corruption and that results in directory verifier functions throwing corruption warnings into the logs. While no data or directory entries are lost, access to them may be compromised, and attempts to remove entries from a directory that has suffered from this corruption may result in a filesystem shutdown. xfs_repair will fix the directory hash ordering without data loss occurring. [dchinner: wrote a useful commit message] cc: <stable@vger.kernel.org> Reported-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: Mark Tinguely <tinguely@sgi.com> Reviewed-by: Ben Myers <bpm@sgi.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
c88547a8119e3b581318ab65e9b72f27f23e641d
linux
bigvul
1
null
null
null
[CIFS] Possible null ptr deref in SMB2_tcon As Raphael Geissert pointed out, tcon_error_exit can dereference tcon and there is one path in which tcon can be null. Signed-off-by: Steve French <smfrench@gmail.com> CC: Stable <stable@vger.kernel.org> # v3.7+ Reported-by: Raphael Geissert <geissert@debian.org>
18f39e7be0121317550d03e267e3ebd4dbfbb3ce
linux
bigvul
1
null
null
null
libceph: do not hard code max auth ticket len We hard code cephx auth ticket buffer size to 256 bytes. This isn't enough for any moderate setup and, in case tickets themselves are not encrypted, leads to buffer overflows (ceph_x_decrypt() errors out, but ceph_decode_copy() doesn't - it's just a memcpy() wrapper). Since the buffer is allocated dynamically anyway, allocate it a bit later, at the point where we know how much is going to be needed. Fixes: http://tracker.ceph.com/issues/8979 Cc: stable@vger.kernel.org Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com> Reviewed-by: Sage Weil <sage@redhat.com>
c27a3e4d667fdcad3db7b104f75659478e0c68d8
linux
bigvul
1
null
null
null
udf: Avoid infinite loop when processing indirect ICBs We did not implement any bound on number of indirect ICBs we follow when loading inode. Thus corrupted medium could cause kernel to go into an infinite loop, possibly causing a stack overflow. Fix the possible stack overflow by removing recursion from __udf_read_inode() and limit number of indirect ICBs we follow to avoid infinite loops. Signed-off-by: Jan Kara <jack@suse.cz>
c03aa9f6e1f938618e6db2e23afef0574efeeb65
linux
bigvul
1
null
null
null
Fixed heap overflow caused by length
e3abe7d7585ecc420a7cab73313216613aadad5a
ettercap
bigvul
1
null
null
null
Fix potential security leak in HashContext Summary: CVE-2014-6229 This is not a NUL-terminated string, it's a fixed-size block of data. The risks were key truncation (if there happens to be a NUL byte in the key) or over-reading (which would be information leakage). Reviewed By: @ptarjan Differential Revision: D1533546
7135ec229882370a00411aa50030eada6034cc1b
hhvm
bigvul
1
null
null
null
Fix integer overflow in chunk_split Reviewed By: @ptarjan Differential Revision: D1515947
1f91e076a585118495b976a413c1df40f6fd3d41
hhvm
bigvul
1
null
null
null
isofs: Fix unbounded recursion when processing relocated directories We did not check relocated directory in any way when processing Rock Ridge 'CL' tag. Thus a corrupted isofs image can possibly have a CL entry pointing to another CL entry leading to possibly unbounded recursion in kernel code and thus stack overflow or deadlocks (if there is a loop created from CL entries). Fix the problem by not allowing CL entry to point to a directory entry with CL entry (such use makes no good sense anyway) and by checking whether CL entry doesn't point to itself. CC: stable@vger.kernel.org Reported-by: Chris Evans <cevans@google.com> Signed-off-by: Jan Kara <jack@suse.cz>
410dd3cf4c9b36f27ed4542ee18b1af5e68645a4
linux
bigvul
1
null
null
null
Fix mcrypt_create_iv(..., MCRYPT_RAND) to auto-seed RNG Summary: Without seeding the random number generator, we'll always get the same IV, and that reduces the security of this function. Fortunately, f_rand() has all of that logic for auto-seeding and selection of a suitable initial seed built-in. Realistically, using MCRYPT_RAND should be deprecated. I'm going to wait on PHP Internals to make a decision on https://wiki.php.net/rfc/deprecate_mcrypt_rand before adding that warning however, so that our test suite remains consistent. Credit: Theodore R. Smith of PHP Experts, Inc. <theodorephpexperts.pro> Closes #3496 Reviewed By: @ptarjan Differential Revision: D1502435
ab6fdeb84fb090b48606b6f7933028cfe7bf3a5e
hhvm
bigvul
1
null
null
null
Support keyless principals in LDAP [CVE-2014-5354] Operations like "kadmin -q 'addprinc -nokey foo'" or "kadmin -q 'purgekeys -all foo'" result in principal entries with no keys present, so krb5_encode_krbsecretkey() would just return NULL, which then got unconditionally dereferenced in krb5_add_ber_mem_ldap_mod(). Apply some fixes to krb5_encode_krbsecretkey() to handle zero-key principals better, correct the test for an allocation failure, and slightly restructure the cleanup handler to be shorter and more appropriate for the usage. Once it no longer short-circuits when n_key_data is zero, it will produce an array of length two with both entries NULL, which is treated as an empty list by the LDAP library, the correct behavior for a keyless principal. However, attributes with empty values are only handled by the LDAP library for Modify operations, not Add operations (which only get a sequence of Attribute, with no operation field). Therefore, only add an empty krbprincipalkey to the modlist when we will be performing a Modify, and not when we will be performing an Add, which is conditional on the (misspelled) create_standalone_prinicipal boolean. CVE-2014-5354: In MIT krb5, when kadmind is configured to use LDAP for the KDC database, an authenticated remote attacker can cause a NULL dereference by inserting into the database a principal entry which contains no long-term keys. In order for the LDAP KDC backend to translate a principal entry from the database abstraction layer into the form expected by the LDAP schema, the principal's keys are encoded into a NULL-terminated array of length-value entries to be stored in the LDAP database. However, the subroutine which produced this array did not correctly handle the case where no keys were present, returning NULL instead of an empty array, and the array was unconditionally dereferenced while adding to the list of LDAP operations to perform. Versions of MIT krb5 prior to 1.12 did not expose a way for principal entries to have no long-term key material, and therefore are not vulnerable. CVSSv2 Vector: AV:N/AC:M/Au:S/C:N/I:N/A:P/E:H/RL:OF/RC:C ticket: 8041 (new) tags: pullup target_version: 1.13.1 subject: kadmind with ldap backend crashes when putting keyless entries
04038bf3633c4b909b5ded3072dc88c8c419bf16
krb5
bigvul
1
null
null
null
Fix LDAP misused policy name crash [CVE-2014-5353] In krb5_ldap_get_password_policy_from_dn, if LDAP_SEARCH returns successfully with no results, return KRB5_KDB_NOENTRY instead of returning success with a zeroed-out policy object. This fixes a null dereference when an admin attempts to use an LDAP ticket policy name as a password policy name. CVE-2014-5353: In MIT krb5, when kadmind is configured to use LDAP for the KDC database, an authenticated remote attacker can cause a NULL dereference by attempting to use a named ticket policy object as a password policy for a principal. The attacker needs to be authenticated as a user who has the elevated privilege for setting password policy by adding or modifying principals. Queries to LDAP scoped to the krbPwdPolicy object class will correctly not return entries of other classes, such as ticket policy objects, but may return success with no returned elements if an object with the requested DN exists in a different object class. In this case, the routine to retrieve a password policy returned success with a password policy object that consisted entirely of zeroed memory. In particular, accesses to the policy name will dereference a NULL pointer. KDC operation does not access the policy name field, but most kadmin operations involving the principal with incorrect password policy will trigger the crash. Thanks to Patrik Kis for reporting this problem. CVSSv2 Vector: AV:N/AC:M/Au:S/C:N/I:N/A:C/E:H/RL:OF/RC:C [kaduk@mit.edu: CVE description and CVSS score] ticket: 8051 (new) target_version: 1.13.1 tags: pullup
d1f707024f1d0af6e54a18885322d70fa15ec4d3
krb5
bigvul
1
null
null
null
Return only new keys in randkey [CVE-2014-5351] In kadmind's randkey operation, if a client specifies the keepold flag, do not include the preserved old keys in the response. CVE-2014-5351: An authenticated remote attacker can retrieve the current keys for a service principal when generating a new set of keys for that principal. The attacker needs to be authenticated as a user who has the elevated privilege for randomizing the keys of other principals. Normally, when a Kerberos administrator randomizes the keys of a service principal, kadmind returns only the new keys. This prevents an administrator who lacks legitimate privileged access to a service from forging tickets to authenticate to that service. If the "keepold" flag to the kadmin randkey RPC operation is true, kadmind retains the old keys in the KDC database as intended, but also unexpectedly returns the old keys to the client, which exposes the service to ticket forgery attacks from the administrator. A mitigating factor is that legitimate clients of the affected service will start failing to authenticate to the service once they begin to receive service tickets encrypted in the new keys. The affected service will be unable to decrypt the newly issued tickets, possibly alerting the legitimate administrator of the affected service. CVSSv2: AV:N/AC:H/Au:S/C:P/I:N/A:N/E:POC/RL:OF/RC:C [tlyu@mit.edu: CVE description and CVSS score] ticket: 8018 (new) target_version: 1.13 tags: pullup
af0ed4df4dfae762ab5fb605f5a0c8f59cb4f6ca
krb5
bigvul
1
null
null
null
Request: new request session flag to mark those files opened by FDT This patch aims to fix a potential DDoS problem that can be caused by repeatedly querying the server for non-existent resources. When serving a static file, the core uses the Vhost FDT mechanism, but if it sends a static error page it does a direct open(2). When closing the resources for the same request it was just calling mk_vhost_close() which did not properly clear the file descriptor. This patch adds a new field to the struct session_request called 'fd_is_fdt', which contains MK_TRUE or MK_FALSE depending on how fd_file was opened. Thanks to Matthew Daley <mattd@bugfuzz.com> for reporting and troubleshooting this problem. Signed-off-by: Eduardo Silva <eduardo@monkey.io>
b2d0e6f92310bb14a15aa2f8e96e1fb5379776dd
monkey
bigvul
1
null
null
null
mnt: Correct permission checks in do_remount While investigating the issue where "mount --bind -oremount,ro ..." would result in a later "mount --bind -oremount,rw" succeeding even if the mount started off locked, I realized that there are several additional mount flags that should be locked and are not. In particular MNT_NOSUID, MNT_NODEV, MNT_NOEXEC, and the atime flags in addition to MNT_READONLY should all be locked. These flags are all per superblock, can all be changed with MS_BIND, and should not be changeable if set by a more privileged user. The following additions to the current logic are added in this patch. - nosuid may not be clearable by a less privileged user. - nodev may not be clearable by a less privileged user. - noexec may not be clearable by a less privileged user. - atime flags may not be changeable by a less privileged user. The logic with atime is that always setting atime on access is a global policy and backup software and auditing software could break if atime bits are not updated (when they are configured to be updated), and serious performance degradation could result (DOS attack) if atime updates happen when they have been explicitly disabled. Therefore an unprivileged user should not be able to mess with the atime bits set by a more privileged user. The additional restrictions are implemented with the addition of MNT_LOCK_NOSUID, MNT_LOCK_NODEV, MNT_LOCK_NOEXEC, and MNT_LOCK_ATIME mnt flags. Taken together these changes and the fixes for MNT_LOCK_READONLY should make it safe for an unprivileged user to create a user namespace and to call "mount --bind -o remount,... ..." without the danger of mount flags being changed maliciously. Cc: stable@vger.kernel.org Acked-by: Serge E. Hallyn <serge.hallyn@ubuntu.com> Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
9566d6742852c527bf5af38af5cbb878dad75705
linux
bigvul
1
null
null
null
mnt: Only change user settable mount flags in remount Kenton Varda <kenton@sandstorm.io> discovered that by remounting a read-only bind mount read-only in a user namespace the MNT_LOCK_READONLY bit would be cleared, allowing an unprivileged user to then remount a read-only mount read-write. Correct this by replacing the mask of mount flags to preserve with a mask of mount flags that may be changed, and preserve all others. This ensures that any future bugs with this mask and remount will fail in an easy-to-detect way where new mount flags simply won't change. Cc: stable@vger.kernel.org Acked-by: Serge E. Hallyn <serge.hallyn@ubuntu.com> Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
a6138db815df5ee542d848318e5dae681590fccd
linux
bigvul
1
null
null
null
net: sctp: inherit auth_capable on INIT collisions Jason reported an oops caused by SCTP on his ARM machine with SCTP authentication enabled: Internal error: Oops: 17 [#1] ARM CPU: 0 PID: 104 Comm: sctp-test Not tainted 3.13.0-68744-g3632f30c9b20-dirty #1 task: c6eefa40 ti: c6f52000 task.ti: c6f52000 PC is at sctp_auth_calculate_hmac+0xc4/0x10c LR is at sg_init_table+0x20/0x38 pc : [<c024bb80>] lr : [<c00f32dc>] psr: 40000013 sp : c6f538e8 ip : 00000000 fp : c6f53924 r10: c6f50d80 r9 : 00000000 r8 : 00010000 r7 : 00000000 r6 : c7be4000 r5 : 00000000 r4 : c6f56254 r3 : c00c8170 r2 : 00000001 r1 : 00000008 r0 : c6f1e660 Flags: nZcv IRQs on FIQs on Mode SVC_32 ISA ARM Segment user Control: 0005397f Table: 06f28000 DAC: 00000015 Process sctp-test (pid: 104, stack limit = 0xc6f521c0) Stack: (0xc6f538e8 to 0xc6f54000) [...] Backtrace: [<c024babc>] (sctp_auth_calculate_hmac+0x0/0x10c) from [<c0249af8>] (sctp_packet_transmit+0x33c/0x5c8) [<c02497bc>] (sctp_packet_transmit+0x0/0x5c8) from [<c023e96c>] (sctp_outq_flush+0x7fc/0x844) [<c023e170>] (sctp_outq_flush+0x0/0x844) from [<c023ef78>] (sctp_outq_uncork+0x24/0x28) [<c023ef54>] (sctp_outq_uncork+0x0/0x28) from [<c0234364>] (sctp_side_effects+0x1134/0x1220) [<c0233230>] (sctp_side_effects+0x0/0x1220) from [<c02330b0>] (sctp_do_sm+0xac/0xd4) [<c0233004>] (sctp_do_sm+0x0/0xd4) from [<c023675c>] (sctp_assoc_bh_rcv+0x118/0x160) [<c0236644>] (sctp_assoc_bh_rcv+0x0/0x160) from [<c023d5bc>] (sctp_inq_push+0x6c/0x74) [<c023d550>] (sctp_inq_push+0x0/0x74) from [<c024a6b0>] (sctp_rcv+0x7d8/0x888) While we already had various kind of bugs in that area ec0223ec48a9 ("net: sctp: fix sctp_sf_do_5_1D_ce to verify if we/peer is AUTH capable") and b14878ccb7fa ("net: sctp: cache auth_enable per endpoint"), this one is a bit of a different kind. Giving a bit more background on why SCTP authentication is needed can be found in RFC4895: SCTP uses 32-bit verification tags to protect itself against blind attackers. These values are not changed during the lifetime of an SCTP association. Looking at new SCTP extensions, there is the need to have a method of proving that an SCTP chunk(s) was really sent by the original peer that started the association and not by a malicious attacker. To cause this bug, we're triggering an INIT collision between peers; normal SCTP handshake where both sides intent to authenticate packets contains RANDOM; CHUNKS; HMAC-ALGO parameters that are being negotiated among peers: ---------- INIT[RANDOM; CHUNKS; HMAC-ALGO] ----------> <------- INIT-ACK[RANDOM; CHUNKS; HMAC-ALGO] --------- -------------------- COOKIE-ECHO --------------------> <-------------------- COOKIE-ACK --------------------- RFC4895 says that each endpoint therefore knows its own random number and the peer's random number *after* the association has been established. The local and peer's random number along with the shared key are then part of the secret used for calculating the HMAC in the AUTH chunk. Now, in our scenario, we have 2 threads with 1 non-blocking SEQ_PACKET socket each, setting up common shared SCTP_AUTH_KEY and SCTP_AUTH_ACTIVE_KEY properly, and each of them calling sctp_bindx(3), listen(2) and connect(2) against each other, thus the handshake looks similar to this, e.g.: ---------- INIT[RANDOM; CHUNKS; HMAC-ALGO] ----------> <------- INIT-ACK[RANDOM; CHUNKS; HMAC-ALGO] --------- <--------- INIT[RANDOM; CHUNKS; HMAC-ALGO] ----------- -------- INIT-ACK[RANDOM; CHUNKS; HMAC-ALGO] --------> ... 
Since such collisions can also happen with verification tags, the RFC4895 for AUTH rather vaguely says under section 6.1: In case of INIT collision, the rules governing the handling of this Random Number follow the same pattern as those for the Verification Tag, as explained in Section 5.2.4 of RFC 2960 [5]. Therefore, each endpoint knows its own Random Number and the peer's Random Number after the association has been established. In RFC2960, section 5.2.4, we're eventually hitting Action B: B) In this case, both sides may be attempting to start an association at about the same time but the peer endpoint started its INIT after responding to the local endpoint's INIT. Thus it may have picked a new Verification Tag not being aware of the previous Tag it had sent this endpoint. The endpoint should stay in or enter the ESTABLISHED state but it MUST update its peer's Verification Tag from the State Cookie, stop any init or cookie timers that may running and send a COOKIE ACK. In other words, the handling of the Random parameter is the same as behavior for the Verification Tag as described in Action B of section 5.2.4. Looking at the code, we exactly hit the sctp_sf_do_dupcook_b() case which triggers an SCTP_CMD_UPDATE_ASSOC command to the side effect interpreter, and in fact it properly copies over peer_{random, hmacs, chunks} parameters from the newly created association to update the existing one. Also, the old asoc_shared_key is being released and based on the new params, sctp_auth_asoc_init_active_key() updated. However, the issue observed in this case is that the previous asoc->peer.auth_capable was 0, and has *not* been updated, so that instead of creating a new secret, we're doing an early return from the function sctp_auth_asoc_init_active_key() leaving asoc->asoc_shared_key as NULL. However, we now have to authenticate chunks from the updated chunk list (e.g. COOKIE-ACK). That in fact causes the server side when responding with ... <------------------ AUTH; COOKIE-ACK ----------------- ... to trigger a NULL pointer dereference, since in sctp_packet_transmit(), it discovers that an AUTH chunk is being queued for xmit, and thus it calls sctp_auth_calculate_hmac(). Since the asoc->active_key_id is still inherited from the endpoint, and the same as encoded into the chunk, it uses asoc->asoc_shared_key, which is still NULL, as an asoc_key and dereferences it in ... crypto_hash_setkey(desc.tfm, &asoc_key->data[0], asoc_key->len) ... causing an oops. All this happens because sctp_make_cookie_ack() called with the *new* association has the peer.auth_capable=1 and therefore marks the chunk with auth=1 after checking sctp_auth_send_cid(), but it is *actually* sent later on over the then *updated* association's transport that didn't initialize its shared key due to peer.auth_capable=0. Since control chunks in that case are not sent by the temporary association which are scheduled for deletion, they are issued for xmit via SCTP_CMD_REPLY in the interpreter with the context of the *updated* association. peer.auth_capable was 0 in the updated association (which went from COOKIE_WAIT into ESTABLISHED state), since all previous processing that performed sctp_process_init() was being done on temporary associations, that we eventually throw away each time. The correct fix is to update to the new peer.auth_capable value as well in the collision case via sctp_assoc_update(), so that in case the collision migrated from 0 -> 1, sctp_auth_asoc_init_active_key() can properly recalculate the secret. 
This therefore fixes the observed server panic. Fixes: 730fc3d05cd4 ("[SCTP]: Implete SCTP-AUTH parameter processing") Reported-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com> Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Tested-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com> Cc: Vlad Yasevich <vyasevich@gmail.com> Acked-by: Vlad Yasevich <vyasevich@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
1be9a950c646c9092fb3618197f7b6bfb50e82aa
linux
bigvul
1
null
null
null
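The SCTP INIT-collision record above turns on a single missed field copy: peer.auth_capable stayed 0 in the surviving association, so sctp_auth_asoc_init_active_key() returned early and asoc_shared_key stayed NULL. Below is a minimal userspace sketch of that idea only — the struct, field, and function names are invented for illustration and are not the kernel code.

```c
/* Minimal userspace model of the fix idea: when an existing association is
 * updated from a colliding one, the auth_capable flag must be copied over as
 * well, otherwise the shared-key setup keeps returning early and the shared
 * key stays NULL.  All names here are illustrative, not kernel code. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct assoc {
    int auth_capable;            /* negotiated via RANDOM/CHUNKS/HMAC-ALGO */
    unsigned char *shared_key;
};

static int init_active_key(struct assoc *a)
{
    if (!a->auth_capable)
        return 0;                    /* early return: no key is computed */
    free(a->shared_key);
    a->shared_key = malloc(32);
    if (!a->shared_key)
        return -1;
    memset(a->shared_key, 0xaa, 32); /* stands in for the real HMAC secret */
    return 0;
}

static void assoc_update(struct assoc *old, const struct assoc *new)
{
    old->auth_capable = new->auth_capable;  /* the missing copy in the bug */
    init_active_key(old);                   /* recompute with new parameters */
}

int main(void)
{
    struct assoc existing  = { .auth_capable = 0, .shared_key = NULL };
    struct assoc colliding = { .auth_capable = 1, .shared_key = NULL };

    assoc_update(&existing, &colliding);
    printf("shared key %s\n", existing.shared_key ? "present" : "NULL");
    free(existing.shared_key);
    return 0;
}
```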
fs: umount on symlink leaks mnt count Currently umount on symlink blocks following umount: /vz is separate mount # ls /vz/ -al | grep test drwxr-xr-x. 2 root root 4096 Jul 19 01:14 testdir lrwxrwxrwx. 1 root root 11 Jul 19 01:16 testlink -> /vz/testdir # umount -l /vz/testlink umount: /vz/testlink: not mounted (expected) # lsof /vz # umount /vz umount: /vz: device is busy. (unexpected) In this case mountpoint_last() gets an extra refcount on path->mnt Signed-off-by: Vasily Averin <vvs@openvz.org> Acked-by: Ian Kent <raven@themaw.net> Acked-by: Jeff Layton <jlayton@primarydata.com> Cc: stable@vger.kernel.org Signed-off-by: Christoph Hellwig <hch@lst.de>
295dc39d941dc2ae53d5c170365af4c9d5c16212
linux
bigvul
1
null
null
null
net/l2tp: don't fall back on UDP [get|set]sockopt The l2tp [get|set]sockopt() code has fallen back to the UDP functions for socket option levels != SOL_PPPOL2TP since day one, but that has never actually worked, since the l2tp socket isn't an inet socket. As David Miller points out: "If we wanted this to work, it'd have to look up the tunnel and then use tunnel->sk, but I wonder how useful that would be" Since this can never have worked so nobody could possibly have depended on that functionality, just remove the broken code and return -EINVAL. Reported-by: Sasha Levin <sasha.levin@oracle.com> Acked-by: James Chapman <jchapman@katalix.com> Acked-by: David Miller <davem@davemloft.net> Cc: Phil Turnbull <phil.turnbull@oracle.com> Cc: Vegard Nossum <vegard.nossum@oracle.com> Cc: Willy Tarreau <w@1wt.eu> Cc: stable@vger.kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
3cf521f7dc87c031617fd47e4b7aa2593c2f3daf
linux
bigvul
1
null
null
null
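For the l2tp sockopt record above, the shape of the change is simply to reject unsupported option levels instead of delegating to another protocol's handlers. A hedged sketch (the function is a stand-in, not the real pppol2tp implementation):

```c
/* Illustrative only: return -EINVAL for any option level other than the
 * protocol's own, instead of falling back to another protocol's handlers. */
#include <errno.h>
#include <stdio.h>

#define SOL_PPPOL2TP 273   /* value from linux/socket.h */

static int pppol2tp_setsockopt_level(int level)
{
    if (level != SOL_PPPOL2TP)
        return -EINVAL;    /* previously: fell through to UDP handlers */
    return 0;              /* handle the option here */
}

int main(void)
{
    printf("other level  -> %d\n", pppol2tp_setsockopt_level(1));
    printf("SOL_PPPOL2TP -> %d\n", pppol2tp_setsockopt_level(SOL_PPPOL2TP));
    return 0;
}
```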
ptrace,x86: force IRET path after a ptrace_stop() The 'sysret' fastpath does not correctly restore even all regular registers, much less any segment registers or reflags values. That is very much part of why it's faster than 'iret'. Normally that isn't a problem, because the normal ptrace() interface catches the process using the signal handler infrastructure, which always returns with an iret. However, some paths can get caught using ptrace_event() instead of the signal path, and for those we need to make sure that we aren't going to return to user space using 'sysret'. Otherwise the modifications that may have been done to the register set by the tracer wouldn't necessarily take effect. Fix it by forcing IRET path by setting TIF_NOTIFY_RESUME from arch_ptrace_stop_needed() which is invoked from ptrace_stop(). Signed-off-by: Tejun Heo <tj@kernel.org> Reported-by: Andy Lutomirski <luto@amacapital.net> Acked-by: Oleg Nesterov <oleg@redhat.com> Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: stable@vger.kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
b9cd18de4db3c9ffa7e17b0dc0ca99ed5aa4d43a
linux
bigvul
1
null
null
null
Prevent the LDAP validator from accepting an empty password.
fbda667221c51f0aa476a02366e0cf66cb012f88
webserver
bigvul
1
null
null
null
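The LDAP validator record above is terse; the underlying hazard is that many LDAP servers treat a simple bind with an empty password as an anonymous bind that nevertheless succeeds, so credentials must be rejected before any bind is attempted. A small illustrative check (the helper name is hypothetical, not the webserver's actual API):

```c
#include <stdbool.h>
#include <stdio.h>

/* Reject empty user names and passwords up front; an empty password would
 * otherwise turn an LDAP simple bind into an anonymous bind that "succeeds"
 * without proving anything. */
static bool credentials_acceptable(const char *user, const char *password)
{
    if (!user || !*user)
        return false;
    if (!password || !*password)
        return false;
    return true;   /* only now is it safe to attempt the LDAP bind */
}

int main(void)
{
    printf("%d\n", credentials_acceptable("alice", ""));       /* 0 */
    printf("%d\n", credentials_acceptable("alice", "secret")); /* 1 */
    return 0;
}
```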
sctp: Fix sk_ack_backlog wrap-around problem Consider the scenario: For a TCP-style socket, while processing the COOKIE_ECHO chunk in sctp_sf_do_5_1D_ce(), after it has passed a series of sanity checks, a new association would be created in sctp_unpack_cookie(), but afterwards some processing may fail, and sctp_association_free() will be called to free the previously allocated association. In sctp_association_free(), the sk_ack_backlog value is decremented for this socket; since the initial value for sk_ack_backlog is 0, after the decrement it will be 65535, a wrap-around problem happens, and if we want to establish new associations afterward on the same socket, ABORT would be triggered since SCTP deems the accept queue full. Fix this issue by only decrementing sk_ack_backlog for associations in the endpoint's list. Fix-suggested-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: Xufeng Zhang <xufeng.zhang@windriver.com> Acked-by: Daniel Borkmann <dborkman@redhat.com> Acked-by: Vlad Yasevich <vyasevich@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
d3217b15a19a4779c39b212358a5c71d725822ee
linux
bigvul
1
null
null
null
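The wrap-around described in the sk_ack_backlog record above is easy to reproduce with a 16-bit unsigned counter; the sketch below shows the unguarded decrement wrapping to 65535 and a guarded variant that stays at zero. This models the symptom only, not the actual fix of accounting associations on the endpoint's list.

```c
#include <stdint.h>
#include <stdio.h>

/* sk_ack_backlog is an unsigned counter; decrementing it when it is already
 * zero wraps to 65535 and makes the accept queue look permanently full. */
static void backlog_dec(uint16_t *backlog)
{
    if (*backlog > 0)
        *backlog -= 1;
}

int main(void)
{
    uint16_t backlog = 0;
    backlog -= 1;                                    /* unguarded: wraps */
    printf("unguarded: %u\n", (unsigned)backlog);    /* 65535 */

    backlog = 0;
    backlog_dec(&backlog);                           /* guarded: stays 0 */
    printf("guarded:   %u\n", (unsigned)backlog);
    return 0;
}
```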
ALSA: control: Handle numid overflow Each control gets automatically assigned its numids when the control is created. The allocation is done by incrementing the numid by the amount of allocated numids per allocation. This means that excessive creation and destruction of controls (e.g. via SNDRV_CTL_IOCTL_ELEM_ADD/REMOVE) can cause the id to eventually overflow. Currently, when this happens for the control that caused the overflow, kctl->id.numid + kctl->count will also overflow, causing it to be smaller than kctl->id.numid. Most of the code assumes that this is something that cannot happen, so we need to make sure that it won't happen. Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Acked-by: Jaroslav Kysela <perex@perex.cz> Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de>
ac902c112d90a89e59916f751c2745f4dbdbb4bd
linux
bigvul
1
null
null
null
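For the numid overflow record above, the essential check is that last_numid + count cannot wrap; written as a subtraction against UINT_MAX it avoids performing the overflowing addition at all. Illustrative only — the names do not match the ALSA code:

```c
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

/* Before handing out a block of numids, check that the allocation cannot
 * wrap: last_numid + count must not exceed UINT_MAX.  Written without
 * actually performing the overflowing addition. */
static bool numid_alloc_ok(unsigned int last_numid, unsigned int count)
{
    return count <= UINT_MAX - last_numid;
}

int main(void)
{
    printf("%d\n", numid_alloc_ok(100, 4));            /* 1: fine */
    printf("%d\n", numid_alloc_ok(UINT_MAX - 2, 4));   /* 0: would wrap */
    return 0;
}
```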
ALSA: control: Fix replacing user controls There are two issues with the current implementation for replacing user controls. The first is that the code does not check if the control is actually a user control, and neither does it check if the control is owned by the process that tries to remove it. That allows userspace applications to remove arbitrary controls, which can cause a use after free if, for example, a driver does not expect a control to be removed from under its feet. The second issue is that on one hand when a control is replaced the user_ctl_count limit is not checked, and on the other hand user_ctl_count is increased (even though the number of user controls does not change). This allows userspace, once the user_ctl_count limit has been reached, to repeatedly replace a control until user_ctl_count overflows. Once that happens new controls can be added, effectively bypassing the user_ctl_count limit. Both issues can be fixed by using snd_ctl_remove_user_ctl() instead of open-coding the removal of the control that is to be replaced. This function does proper permission checks as well as decrements user_ctl_count after the control has been removed. Note that by using snd_ctl_remove_user_ctl() the check which returns -EBUSY at the beginning of the function if the control already exists is removed. This is not a problem, though, since the check is quite useless: the lock that is protecting the control list is released between the check and adding the new control to the list, which means that it is possible that a different control with the same settings is added to the list after the check. Luckily there is another check that is done while holding the lock in snd_ctl_add(), so we'll rely on that to make sure that the same control is not added twice. Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Acked-by: Jaroslav Kysela <perex@perex.cz> Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de>
82262a46627bebb0febcc26664746c25cef08563
linux
bigvul
1
null
null
null
ALSA: control: Don't access controls outside of protected regions A control that is visible on the card->controls list can be freed at any time. This means we must not access any of its memory while not holding the controls_rw_lock. Otherwise we risk a use after free access. Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Acked-by: Jaroslav Kysela <perex@perex.cz> Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de>
fd9f26e4eca5d08a27d12c0933fceef76ed9663d
linux
bigvul
1
null
null
null
ALSA: control: Protect user controls against concurrent access The user-control put and get handlers as well as the tlv handler do not protect against concurrent access from multiple threads. Since the state of the control is not updated atomically, it is possible that either two write operations or a write and a read operation race against each other. Both can lead to arbitrary memory disclosure. This patch introduces a new lock that protects user-controls from concurrent access. Since applications typically access controls sequentially rather than in parallel, a single lock per card should be fine. Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Acked-by: Jaroslav Kysela <perex@perex.cz> Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de>
07f4d9d74a04aa7c72c5dae0ef97565f28f17b92
linux
bigvul
1
null
null
null
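The user-control locking record above boils down to serialising get/put on a per-card lock so that no reader ever observes a half-written value. A rough userspace analogue using a pthread mutex (struct and function names are invented, not the ALSA API):

```c
#include <pthread.h>
#include <stdio.h>
#include <string.h>

/* One lock per card serialises get/put on user controls, so a reader can
 * never observe (and a writer never produce) a half-updated value. */
struct card {
    pthread_mutex_t user_ctl_lock;
    char value[64];
};

static void ctl_put(struct card *c, const char *v)
{
    pthread_mutex_lock(&c->user_ctl_lock);
    snprintf(c->value, sizeof(c->value), "%s", v);
    pthread_mutex_unlock(&c->user_ctl_lock);
}

static void ctl_get(struct card *c, char *out, size_t len)
{
    pthread_mutex_lock(&c->user_ctl_lock);
    snprintf(out, len, "%s", c->value);
    pthread_mutex_unlock(&c->user_ctl_lock);
}

int main(void)
{
    static struct card c = {
        .user_ctl_lock = PTHREAD_MUTEX_INITIALIZER,
        .value = "init",
    };
    char buf[64];

    ctl_put(&c, "new-value");
    ctl_get(&c, buf, sizeof(buf));
    printf("%s\n", buf);
    return 0;
}
```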
lz4: ensure length does not wrap Given some pathologically compressed data, lz4 could possibly decide to wrap a few internal variables, causing unknown things to happen. Catch this before the wrapping happens and abort the decompression. Reported-by: "Don A. Bailey" <donb@securitymouse.com> Cc: stable <stable@vger.kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
206204a1162b995e2185275167b22468c00d6b36
linux
bigvul
1
null
null
null
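For the lz4 record above, the general shape of the check is to validate an attacker-influenced length against the remaining output space before advancing any pointer, so the addition can neither wrap nor overrun. This shows the idea only, not the lz4 decompressor itself:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Before advancing an output pointer by an untrusted length, check that the
 * length fits in the space left in the buffer; comparing against the
 * remaining space avoids computing a pointer that could wrap. */
static bool advance_ok(const unsigned char *op, const unsigned char *oend,
                       size_t length)
{
    return length <= (size_t)(oend - op);   /* assumes op <= oend */
}

int main(void)
{
    unsigned char out[64];
    const unsigned char *op = out, *oend = out + sizeof(out);

    printf("%d\n", advance_ok(op, oend, 32));          /* 1 */
    printf("%d\n", advance_ok(op, oend, (size_t)-1));  /* 0: would wrap */
    return 0;
}
```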
lzo: properly check for overruns The lzo decompressor can, if given some really crazy data, possibly overrun some variable types. Modify the checking logic to properly detect overruns before they happen. Reported-by: "Don A. Bailey" <donb@securitymouse.com> Tested-by: "Don A. Bailey" <donb@securitymouse.com> Cc: stable <stable@vger.kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
206a81c18401c0cde6e579164f752c4b147324ce
linux
bigvul
1
null
null
null
stratum: parse_notify(): Don't die on malformed bbversion/prev_hash/nbit/ntime. Might have introduced a memory leak, don't have time to check. :( Should the other hex2bin()'s be checked? Thanks to Mick Ayzenberg <mick.dejavusecurity.com> for finding this.
910c36089940e81fb85c65b8e63dcd2fac71470c
sgminer
bigvul
1
null
null
null
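The parse_notify record above comes down to validating hex fields before decoding them rather than assuming well-formed pool input. A self-contained hex2bin sketch that fails cleanly on odd length, wrong size, or non-hex characters (this is not sgminer's actual hex2bin):

```c
#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Decode a hex string into a fixed-size buffer, failing (instead of
 * aborting or overflowing) on odd length, non-hex characters, or a value
 * that does not exactly fit the destination. */
static bool hex2bin(unsigned char *dst, size_t dstlen, const char *hex)
{
    size_t n = strlen(hex);

    if (n % 2 != 0 || n / 2 != dstlen)
        return false;
    for (size_t i = 0; i < dstlen; i++) {
        unsigned int byte;

        if (!isxdigit((unsigned char)hex[2 * i]) ||
            !isxdigit((unsigned char)hex[2 * i + 1]))
            return false;
        sscanf(&hex[2 * i], "%2x", &byte);
        dst[i] = (unsigned char)byte;
    }
    return true;
}

int main(void)
{
    unsigned char prev_hash[32];

    /* Malformed field from the pool: the caller logs and skips the notify
     * message rather than dying. */
    if (!hex2bin(prev_hash, sizeof(prev_hash), "zz42"))
        printf("malformed prev_hash, ignoring notify\n");
    return 0;
}
```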
Do some random sanity checking for stratum message parsing
e1c5050734123973b99d181c45e74b2cbb00272e
cgminer
bigvul
1
null
null
null
Stratum: extract_sockaddr: Truncate overlong addresses rather than stack overflow Thanks to Mick Ayzenberg <mick@dejavusecurity.com> for finding this!
b65574bef233474e915fdf18614aa211e31cc6c2
sgminer
bigvul
1
null
null
null
Fix LDAP key data segmentation [CVE-2014-4345] For principal entries having keys with multiple kvnos (due to use of -keepold), the LDAP KDB module makes an attempt to store all the keys having the same kvno into a single krbPrincipalKey attribute value. There is a fencepost error in the loop, causing currkvno to be set to the just-processed value instead of the next kvno. As a result, the second and all following groups of multiple keys by kvno are each stored in two krbPrincipalKey attribute values. Fix the loop to use the correct kvno value. CVE-2014-4345: In MIT krb5, when kadmind is configured to use LDAP for the KDC database, an authenticated remote attacker can cause it to perform an out-of-bounds write (buffer overrun) by performing multiple cpw -keepold operations. An off-by-one error while copying key information to the new database entry results in keys sharing a common kvno being written to different array buckets, in an array whose size is determined by the number of kvnos present. After sufficient iterations, the extra writes extend past the end of the (NULL-terminated) array. The NULL terminator is always written after the end of the loop, so no out-of-bounds data is read, it is only written. Historically, it has been possible to convert an out-of-bounds write into remote code execution in some cases, though the necessary exploits must be tailored to the individual application and are usually quite complicated. Depending on the allocated length of the array, an out-of-bounds write may also cause a segmentation fault and/or application crash. CVSSv2 Vector: AV:N/AC:M/Au:S/C:C/I:C/A:C/E:POC/RL:OF/RC:C [ghudson@mit.edu: clarified commit message] [kaduk@mit.edu: CVE summary, CVSSv2 vector] (cherry picked from commit 81c332e29f10887c6b9deb065f81ba259f4c7e03) ticket: 7980 version_fixed: 1.12.2 status: resolved
dc7ed55c689d57de7f7408b34631bf06fec9dab1
krb5
bigvul
1
null
null
null
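The krb5 key-data record above describes a classic fencepost: when grouping a kvno-sorted key array, the running kvno must advance to the next entry's kvno, not the one just processed. The sketch below uses an invented key_data layout and a simplified loop (not the actual LDAP KDB code) to show how the wrong assignment splits every group after the first:

```c
#include <stdio.h>

struct key_data { int kvno; };

/* Group a kvno-sorted key array into one "attribute value" per kvno.
 * The fix is in how currkvno advances at a group boundary: it must become
 * the kvno of the next entry. */
static void group_by_kvno(const struct key_data *keys, int n)
{
    int currkvno = n ? keys[0].kvno : 0;
    int group = 0;

    for (int i = 0; i < n; i++) {
        printf("key %d (kvno %d) -> group %d\n", i, keys[i].kvno, group);
        if (i + 1 < n && keys[i + 1].kvno != currkvno) {
            group++;
            currkvno = keys[i + 1].kvno;   /* correct: next group's kvno */
            /* fencepost bug: currkvno = keys[i].kvno;
             * currkvno then lags behind, and every later kvno group is
             * split across two buckets instead of one */
        }
    }
}

int main(void)
{
    struct key_data keys[] = { {3}, {3}, {2}, {2}, {1}, {1} };
    group_by_kvno(keys, 6);
    return 0;
}
```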
Fix null deref in SPNEGO acceptor [CVE-2014-4344] When processing a continuation token, acc_ctx_cont was dereferencing the initial byte of the token without checking the length. This could result in a null dereference. CVE-2014-4344: In MIT krb5 1.5 and newer, an unauthenticated or partially authenticated remote attacker can cause a NULL dereference and application crash during a SPNEGO negotiation by sending an empty token as the second or later context token from initiator to acceptor. The attacker must provide at least one valid context token in the security context negotiation before sending the empty token. This can be done by an unauthenticated attacker by forcing SPNEGO to renegotiate the underlying mechanism, or by using IAKERB to wrap an unauthenticated AS-REQ as the first token. CVSSv2 Vector: AV:N/AC:L/Au:N/C:N/I:N/A:C/E:POC/RL:OF/RC:C [kaduk@mit.edu: CVE summary, CVSSv2 vector] (cherry picked from commit 524688ce87a15fc75f87efc8c039ba4c7d5c197b) ticket: 7970 version_fixed: 1.12.2 status: resolved
a7886f0ed1277c69142b14a2c6629175a6331edc
krb5
bigvul
1
null
null
null
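For the SPNEGO record above, the entire fix is a length check before the first byte of the continuation token is read. A minimal model (the buffer struct and return codes are simplified stand-ins, not the GSSAPI types):

```c
#include <stdio.h>

/* Check the token length before looking at its first byte; an empty
 * continuation token must be rejected, not dereferenced. */
struct gss_buffer { size_t length; unsigned char *value; };

static int acc_ctx_cont_check(const struct gss_buffer *tok)
{
    if (tok->length == 0 || tok->value == NULL)
        return -1;                    /* roughly: defective-token error */
    return tok->value[0];             /* now safe to inspect the tag byte */
}

int main(void)
{
    struct gss_buffer empty = { 0, NULL };
    unsigned char raw[] = { 0xa1, 0x00 };
    struct gss_buffer ok = { sizeof(raw), raw };

    printf("%d %d\n", acc_ctx_cont_check(&empty), acc_ctx_cont_check(&ok));
    return 0;
}
```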
Fix double-free in SPNEGO [CVE-2014-4343] In commit cd7d6b08 ("Verify acceptor's mech in SPNEGO initiator") the pointer sc->internal_mech became an alias into sc->mech_set->elements, which should be considered constant for the duration of the SPNEGO context. So don't free it. CVE-2014-4343: In MIT krb5 releases 1.10 and newer, an unauthenticated remote attacker with the ability to spoof packets appearing to be from a GSSAPI acceptor can cause a double-free condition in GSSAPI initiators (clients) which are using the SPNEGO mechanism, by returning a different underlying mechanism than was proposed by the initiator. At this stage of the negotiation, the acceptor is unauthenticated, and the acceptor's response could be spoofed by an attacker with the ability to inject traffic to the initiator. Historically, some double-free vulnerabilities can be translated into remote code execution, though the necessary exploits must be tailored to the individual application and are usually quite complicated. Double-frees can also be exploited to cause an application crash, for a denial of service. However, most GSSAPI client applications are not vulnerable, as the SPNEGO mechanism is not used by default (when GSS_C_NO_OID is passed as the mech_type argument to gss_init_sec_context()). The most common use of SPNEGO is for HTTP-Negotiate, used in web browsers and other web clients. Most such clients are believed to not offer HTTP-Negotiate by default, instead requiring a whitelist of sites for which it may be used to be configured. If the whitelist is configured to only allow HTTP-Negotiate over TLS connections ("https://"), a successful attacker must also spoof the web server's SSL certificate, due to the way the WWW-Authenticate header is sent in a 401 (Unauthorized) response message. Unfortunately, many instructions for enabling HTTP-Negotiate in common web browsers do not include a TLS requirement. CVSSv2 Vector: AV:N/AC:H/Au:N/C:C/I:C/A:C/E:POC/RL:OF/RC:C [kaduk@mit.edu: CVE summary and CVSSv2 vector] ticket: 7969 (new) target_version: 1.12.2 tags: pullup
f18ddf5d82de0ab7591a36e465bc24225776940f
krb5
bigvul
1
null
null
null
Handle invalid RFC 1964 tokens [CVE-2014-4341...] Detect the following cases which would otherwise cause invalid memory accesses and/or integer underflow: * An RFC 1964 token being processed by an RFC 4121-only context [CVE-2014-4342] * A header with fewer than 22 bytes after the token ID or an incomplete checksum [CVE-2014-4341 CVE-2014-4342] * A ciphertext shorter than the confounder [CVE-2014-4341] * A declared padding length longer than the plaintext [CVE-2014-4341] If we detect a bad pad byte, continue on to compute the checksum to avoid creating a padding oracle, but treat the checksum as invalid even if it compares equal. CVE-2014-4341: In MIT krb5, an unauthenticated remote attacker with the ability to inject packets into a legitimately established GSSAPI application session can cause a program crash due to invalid memory references when attempting to read beyond the end of a buffer. CVSSv2 Vector: AV:N/AC:M/Au:N/C:N/I:N/A:P/E:POC/RL:OF/RC:C CVE-2014-4342: In MIT krb5 releases krb5-1.7 and later, an unauthenticated remote attacker with the ability to inject packets into a legitimately established GSSAPI application session can cause a program crash due to invalid memory references when reading beyond the end of a buffer or by causing a null pointer dereference. CVSSv2 Vector: AV:N/AC:M/Au:N/C:N/I:N/A:P/E:POC/RL:OF/RC:C [tlyu@mit.edu: CVE summaries, CVSS] (cherry picked from commit fb99962cbd063ac04c9a9d2cc7c75eab73f3533d) ticket: 7949 version_fixed: 1.12.2 status: resolved
e6ae703ae597d798e310368d52b8f38ee11c6a73
krb5
bigvul
1
null
null
null
MIPS: asm: thread_info: Add _TIF_SECCOMP flag Add _TIF_SECCOMP flag to _TIF_WORK_SYSCALL_ENTRY to indicate that the system call needs to be checked against a seccomp filter. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Reviewed-by: Paul Burton <paul.burton@imgtec.com> Reviewed-by: James Hogan <james.hogan@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/6405/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
137f7df8cead00688524c82360930845396b8a21
linux
bigvul
1
null
null
null
Merge branch 'PHP-5.6' * PHP-5.6: Fix potential segfault in dns_get_record()
b34d7849ed90ced9345f8ea1c59bc8d101c18468
php-src
bigvul
1
null
null
null
target/rd: Refactor rd_build_device_space + rd_release_device_space This patch refactors rd_build_device_space() + rd_release_device_space() into rd_allocate_sgl_table() + rd_release_device_space() so that they may be used seperatly for setup + release of protection information scatterlists. Also add explicit memset of pages within rd_allocate_sgl_table() based upon passed 'init_payload' value. v2 changes: - Drop unused sg_table from rd_release_device_space (Wei) Cc: Martin K. Petersen <martin.petersen@oracle.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Hannes Reinecke <hare@suse.de> Cc: Sagi Grimberg <sagig@mellanox.com> Cc: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
4442dc8a92b8f9ad8ee9e7f8438f4c04c03a22dc
linux
bigvul
1
null
null
null
fs,userns: Change inode_capable to capable_wrt_inode_uidgid The kernel has no concept of capabilities with respect to inodes; inodes exist independently of namespaces. For example, inode_capable(inode, CAP_LINUX_IMMUTABLE) would be nonsense. This patch changes inode_capable to check for uid and gid mappings and renames it to capable_wrt_inode_uidgid, which should make it more obvious what it does. Fixes CVE-2014-4014. Cc: Theodore Ts'o <tytso@mit.edu> Cc: Serge Hallyn <serge.hallyn@ubuntu.com> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Cc: Dave Chinner <david@fromorbit.com> Cc: stable@vger.kernel.org Signed-off-by: Andy Lutomirski <luto@amacapital.net> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
23adbe12ef7d3d4195e80800ab36b37bee28cd03
linux
bigvul
1
null
null
null
miniwget.c: fixed potential buffer overrun
3a87aa2f10bd7f1408e1849bdb59c41dd63a9fe9
miniupnp
bigvul
1
null
null
null
Don't use abstract Unix domain sockets
293d9d3f
libfep
bigvul
1
null
null
null
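The abstract-socket record above is a one-liner; the motivation is that abstract Unix sockets (a sun_path beginning with a NUL byte) are not subject to filesystem permissions, so any local user can connect. A sketch of binding a path-based socket with a restrictive umask instead (the path is illustrative and the details may need adjusting per platform):

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/un.h>
#include <unistd.h>

/* Bind a filesystem-backed Unix socket instead of an abstract one, so the
 * usual file permissions control who may connect. */
int main(void)
{
    const char *path = "/tmp/fep-demo.sock";   /* illustrative path */
    struct sockaddr_un addr;
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    if (fd < 0) { perror("socket"); return 1; }

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    /* An abstract socket would set sun_path[0] = '\0'; we use a real path. */
    snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path);
    unlink(path);

    umask(0077);                                /* owner-only access */
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        close(fd);
        return 1;
    }
    printf("listening socket bound at %s\n", path);
    close(fd);
    unlink(path);
    return 0;
}
```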
SERVER-13573 Fix x.509 auth exception
c151e0660b9736fe66b224f1129a16871165251b
mongo
bigvul
1
null
null
null
Fix note bounds reading, Francisco Alonso / Red Hat
39c7ac1106be844a5296d3eb5971946cc09ffda0
file
bigvul
1
null
null
null
x86,kvm,vmx: Preserve CR4 across VM entry CR4 isn't constant; at least the TSD and PCE bits can vary. TBH, treating CR0 and CR3 as constant scares me a bit, too, but it looks like it's correct. This adds a branch and a read from cr4 to each vm entry. Because it is extremely likely that consecutive entries into the same vcpu will have the same host cr4 value, this fixes up the vmcs instead of restoring cr4 after the fact. A subsequent patch will add a kernel-wide cr4 shadow, reducing the overhead in the common case to just two memory reads and a branch. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Acked-by: Paolo Bonzini <pbonzini@redhat.com> Cc: stable@vger.kernel.org Cc: Petr Matousek <pmatouse@redhat.com> Cc: Gleb Natapov <gleb@kernel.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
d974baa398f34393db76be45f7d4d04fbdbb4a0a
linux
bigvul
1
null
null
null
net: sctp: fix remote memory pressure from excessive queueing This scenario is not limited to ASCONF, just taken as one example triggering the issue. When receiving ASCONF probes in the form of ... -------------- INIT[ASCONF; ASCONF_ACK] -------------> <----------- INIT-ACK[ASCONF; ASCONF_ACK] ------------ -------------------- COOKIE-ECHO --------------------> <-------------------- COOKIE-ACK --------------------- ---- ASCONF_a; [ASCONF_b; ...; ASCONF_n;] JUNK ------> [...] ---- ASCONF_m; [ASCONF_o; ...; ASCONF_z;] JUNK ------> ... where ASCONF_a, ASCONF_b, ..., ASCONF_z are good-formed ASCONFs and have increasing serial numbers, we process such ASCONF chunk(s) marked with !end_of_packet and !singleton, since we have not yet reached the SCTP packet end. SCTP does only do verification on a chunk by chunk basis, as an SCTP packet is nothing more than just a container of a stream of chunks which it eats up one by one. We could run into the case that we receive a packet with a malformed tail, above marked as trailing JUNK. All previous chunks are here goodformed, so the stack will eat up all previous chunks up to this point. In case JUNK does not fit into a chunk header and there are no more other chunks in the input queue, or in case JUNK contains a garbage chunk header, but the encoded chunk length would exceed the skb tail, or we came here from an entirely different scenario and the chunk has pdiscard=1 mark (without having had a flush point), it will happen, that we will excessively queue up the association's output queue (a correct final chunk may then turn it into a response flood when flushing the queue ;)): I ran a simple script with incremental ASCONF serial numbers and could see the server side consuming excessive amount of RAM [before/after: up to 2GB and more]. The issue at heart is that the chunk train basically ends with !end_of_packet and !singleton markers and since commit 2e3216cd54b1 ("sctp: Follow security requirement of responding with 1 packet") therefore preventing an output queue flush point in sctp_do_sm() -> sctp_cmd_interpreter() on the input chunk (chunk = event_arg) even though local_cork is set, but its precedence has changed since then. In the normal case, the last chunk with end_of_packet=1 would trigger the queue flush to accommodate possible outgoing bundling. In the input queue, sctp_inq_pop() seems to do the right thing in terms of discarding invalid chunks. So, above JUNK will not enter the state machine and instead be released and exit the sctp_assoc_bh_rcv() chunk processing loop. It's simply the flush point being missing at loop exit. Adding a try-flush approach on the output queue might not work as the underlying infrastructure might be long gone at this point due to the side-effect interpreter run. One possibility, albeit a bit of a kludge, would be to defer invalid chunk freeing into the state machine in order to possibly trigger packet discards and thus indirectly a queue flush on error. It would surely be better to discard chunks as in the current, perhaps better controlled environment, but going back and forth, it's simply architecturally not possible. I tried various trailing JUNK attack cases and it seems to look good now. Joint work with Vlad Yasevich. Fixes: 2e3216cd54b1 ("sctp: Follow security requirement of responding with 1 packet") Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Signed-off-by: Vlad Yasevich <vyasevich@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
26b87c7881006311828bb0ab271a551a62dcceb4
linux
bigvul
1
null
null
null
net: sctp: fix panic on duplicate ASCONF chunks When receiving a e.g. semi-good formed connection scan in the form of ... -------------- INIT[ASCONF; ASCONF_ACK] -------------> <----------- INIT-ACK[ASCONF; ASCONF_ACK] ------------ -------------------- COOKIE-ECHO --------------------> <-------------------- COOKIE-ACK --------------------- ---------------- ASCONF_a; ASCONF_b -----------------> ... where ASCONF_a equals ASCONF_b chunk (at least both serials need to be equal), we panic an SCTP server! The problem is that good-formed ASCONF chunks that we reply with ASCONF_ACK chunks are cached per serial. Thus, when we receive a same ASCONF chunk twice (e.g. through a lost ASCONF_ACK), we do not need to process them again on the server side (that was the idea, also proposed in the RFC). Instead, we know it was cached and we just resend the cached chunk instead. So far, so good. Where things get nasty is in SCTP's side effect interpreter, that is, sctp_cmd_interpreter(): While incoming ASCONF_a (chunk = event_arg) is being marked !end_of_packet and !singleton, and we have an association context, we do not flush the outqueue the first time after processing the ASCONF_ACK singleton chunk via SCTP_CMD_REPLY. Instead, we keep it queued up, although we set local_cork to 1. Commit 2e3216cd54b1 changed the precedence, so that as long as we get bundled, incoming chunks we try possible bundling on outgoing queue as well. Before this commit, we would just flush the output queue. Now, while ASCONF_a's ASCONF_ACK sits in the corked outq, we continue to process the same ASCONF_b chunk from the packet. As we have cached the previous ASCONF_ACK, we find it, grab it and do another SCTP_CMD_REPLY command on it. So, effectively, we rip the chunk->list pointers and requeue the same ASCONF_ACK chunk another time. Since we process ASCONF_b, it's correctly marked with end_of_packet and we enforce an uncork, and thus flush, thus crashing the kernel. Fix it by testing if the ASCONF_ACK is currently pending and if that is the case, do not requeue it. When flushing the output queue we may relink the chunk for preparing an outgoing packet, but eventually unlink it when it's copied into the skb right before transmission. Joint work with Vlad Yasevich. Fixes: 2e3216cd54b1 ("sctp: Follow security requirement of responding with 1 packet") Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Signed-off-by: Vlad Yasevich <vyasevich@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
b69040d8e39f20d5215a03502a8e8b4c6ab78395
linux
bigvul
1
null
null
null
net: sctp: fix skb_over_panic when receiving malformed ASCONF chunks Commit 6f4c618ddb0 ("SCTP : Add paramters validity check for ASCONF chunk") added basic verification of ASCONF chunks, however, it is still possible to remotely crash a server by sending a special crafted ASCONF chunk, even up to pre 2.6.12 kernels: skb_over_panic: text:ffffffffa01ea1c3 len:31056 put:30768 head:ffff88011bd81800 data:ffff88011bd81800 tail:0x7950 end:0x440 dev:<NULL> ------------[ cut here ]------------ kernel BUG at net/core/skbuff.c:129! [...] Call Trace: <IRQ> [<ffffffff8144fb1c>] skb_put+0x5c/0x70 [<ffffffffa01ea1c3>] sctp_addto_chunk+0x63/0xd0 [sctp] [<ffffffffa01eadaf>] sctp_process_asconf+0x1af/0x540 [sctp] [<ffffffff8152d025>] ? _read_unlock_bh+0x15/0x20 [<ffffffffa01e0038>] sctp_sf_do_asconf+0x168/0x240 [sctp] [<ffffffffa01e3751>] sctp_do_sm+0x71/0x1210 [sctp] [<ffffffff8147645d>] ? fib_rules_lookup+0xad/0xf0 [<ffffffffa01e6b22>] ? sctp_cmp_addr_exact+0x32/0x40 [sctp] [<ffffffffa01e8393>] sctp_assoc_bh_rcv+0xd3/0x180 [sctp] [<ffffffffa01ee986>] sctp_inq_push+0x56/0x80 [sctp] [<ffffffffa01fcc42>] sctp_rcv+0x982/0xa10 [sctp] [<ffffffffa01d5123>] ? ipt_local_in_hook+0x23/0x28 [iptable_filter] [<ffffffff8148bdc9>] ? nf_iterate+0x69/0xb0 [<ffffffff81496d10>] ? ip_local_deliver_finish+0x0/0x2d0 [<ffffffff8148bf86>] ? nf_hook_slow+0x76/0x120 [<ffffffff81496d10>] ? ip_local_deliver_finish+0x0/0x2d0 [<ffffffff81496ded>] ip_local_deliver_finish+0xdd/0x2d0 [<ffffffff81497078>] ip_local_deliver+0x98/0xa0 [<ffffffff8149653d>] ip_rcv_finish+0x12d/0x440 [<ffffffff81496ac5>] ip_rcv+0x275/0x350 [<ffffffff8145c88b>] __netif_receive_skb+0x4ab/0x750 [<ffffffff81460588>] netif_receive_skb+0x58/0x60 This can be triggered e.g., through a simple scripted nmap connection scan injecting the chunk after the handshake, for example, ... -------------- INIT[ASCONF; ASCONF_ACK] -------------> <----------- INIT-ACK[ASCONF; ASCONF_ACK] ------------ -------------------- COOKIE-ECHO --------------------> <-------------------- COOKIE-ACK --------------------- ------------------ ASCONF; UNKNOWN ------------------> ... where ASCONF chunk of length 280 contains 2 parameters ... 1) Add IP address parameter (param length: 16) 2) Add/del IP address parameter (param length: 255) ... followed by an UNKNOWN chunk of e.g. 4 bytes. Here, the Address Parameter in the ASCONF chunk is even missing, too. This is just an example and similarly-crafted ASCONF chunks could be used just as well. The ASCONF chunk passes through sctp_verify_asconf() as all parameters passed sanity checks, and after walking, we ended up successfully at the chunk end boundary, and thus may invoke sctp_process_asconf(). Parameter walking is done with WORD_ROUND() to take padding into account. In sctp_process_asconf()'s TLV processing, we may fail in sctp_process_asconf_param() e.g., due to removal of the IP address that is also the source address of the packet containing the ASCONF chunk, and thus we need to add all TLVs after the failure to our ASCONF response to remote via helper function sctp_add_asconf_response(), which basically invokes a sctp_addto_chunk() adding the error parameters to the given skb. When walking to the next parameter this time, we proceed with ... length = ntohs(asconf_param->param_hdr.length); asconf_param = (void *)asconf_param + length; ... 
instead of the WORD_ROUND()'ed length, thus resulting here in an off-by-one that leads to reading the follow-up garbage parameter length of 12336, and thus throwing an skb_over_panic for the reply when trying to sctp_addto_chunk() next time, which implicitly calls the skb_put() with that length. Fix it by using sctp_walk_params() [ which is also used in INIT parameter processing ] macro in the verification *and* in ASCONF processing: it will make sure we don't spill over, that we walk parameters WORD_ROUND()'ed. Moreover, we're being more defensive and guard against unknown parameter types and missized addresses. Joint work with Vlad Yasevich. Fixes: b896b82be4ae ("[SCTP] ADDIP: Support for processing incoming ASCONF_ACK chunks.") Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Signed-off-by: Vlad Yasevich <vyasevich@gmail.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
9de7922bc709eee2f609cd01d98aaedc4cf5ea74
linux
bigvul
1
null
null
null
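The ASCONF record above hinges on walking TLV parameters with the word-rounded (4-byte padded) length and refusing anything whose header or declared length runs past the chunk end. A standalone sketch of that walk — the layout, names, and WORD_ROUND macro are illustrative, not the sctp_walk_params() macro itself; the second parameter in the test data even claims the 12336-byte garbage length from the report:

```c
#include <arpa/inet.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct param_hdr { uint16_t type; uint16_t length; };

#define WORD_ROUND(len) (((len) + 3U) & ~3U)

/* Advance by the 4-byte padded length, and stop if a header or declared
 * length would run past the end of the chunk. */
static bool walk_params(const uint8_t *buf, size_t buflen)
{
    size_t off = 0;

    while (off + sizeof(struct param_hdr) <= buflen) {
        struct param_hdr hdr;

        memcpy(&hdr, buf + off, sizeof(hdr));
        uint16_t len = ntohs(hdr.length);

        if (len < sizeof(hdr) || len > buflen - off)
            return false;              /* malformed: reject the chunk */
        printf("param len=%u at offset %zu\n", (unsigned)len, off);
        off += WORD_ROUND(len);        /* padded advance, not raw length */
    }
    return off == buflen;              /* must land exactly on the end */
}

int main(void)
{
    /* One well-formed 6-byte parameter (padded to 8) followed by garbage
     * claiming length 12336; the walker rejects it instead of overrunning. */
    uint8_t chunk[] = { 0x00, 0x01, 0x00, 0x06, 0xaa, 0xbb, 0x00, 0x00,
                        0x00, 0x02, 0x30, 0x30 };
    printf("valid: %d\n", walk_params(chunk, sizeof(chunk)));
    return 0;
}
```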
KVM: x86: Handle errors when RIP is set during far jumps Far jmp/call/ret may fault while loading a new RIP. Currently KVM does not handle this case, and may result in failed vm-entry once the assignment is done. The tricky part of doing so is that loading the new CS affects the VMCS/VMCB state, so if we fail during loading the new RIP, we are left in unconsistent state. Therefore, this patch saves on 64-bit the old CS descriptor and restores it if loading RIP failed. This fixes CVE-2014-3647. Cc: stable@vger.kernel.org Signed-off-by: Nadav Amit <namit@cs.technion.ac.il> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
d1442d85cc30ea75f7d399474ca738e0bc96f715
linux
bigvul
1
null
null
null
kvm: vmx: handle invvpid vm exit gracefully On systems with invvpid instruction support (corresponding bit in IA32_VMX_EPT_VPID_CAP MSR is set) guest invocation of invvpid causes vm exit, which is currently not handled and results in propagation of unknown exit to userspace. Fix this by installing an invvpid vm exit handler. This is CVE-2014-3646. Cc: stable@vger.kernel.org Signed-off-by: Petr Matousek <pmatouse@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
a642fc305053cc1c6e47e4f4df327895747ab485
linux
bigvul
1
null
null
null
nEPT: Nested INVEPT If we let L1 use EPT, we should probably also support the INVEPT instruction. In our current nested EPT implementation, when L1 changes its EPT table for L2 (i.e., EPT12), L0 modifies the shadow EPT table (EPT02), and in the course of this modification already calls INVEPT. But if last level of shadow page is unsync not all L1's changes to EPT12 are intercepted, which means roots need to be synced when L1 calls INVEPT. Global INVEPT should not be different since roots are synced by kvm_mmu_load() each time EPTP02 changes. Reviewed-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> Signed-off-by: Nadav Har'El <nyh@il.ibm.com> Signed-off-by: Jun Nakajima <jun.nakajima@intel.com> Signed-off-by: Xinhao Xu <xinhao.xu@intel.com> Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com> Signed-off-by: Gleb Natapov <gleb@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
bfd0a56b90005f8c8a004baf407ad90045c2b11e
linux
bigvul
1
null
null
null
KEYS: Fix termination condition in assoc array garbage collection This fixes CVE-2014-3631. It is possible for an associative array to end up with a shortcut node at the root of the tree if there are more than fan-out leaves in the tree, but they all crowd into the same slot in the lowest level (ie. they all have the same first nibble of their index keys). When assoc_array_gc() returns back up the tree after scanning some leaves, it can fall off of the root and crash because it assumes that the back pointer from a shortcut (after label ascend_old_tree) must point to a normal node - which isn't true of a shortcut node at the root. Should we find we're ascending rootwards over a shortcut, we should check to see if the backpointer is zero - and if it is, we have completed the scan. This particular bug cannot occur if the root node is not a shortcut - ie. if you have fewer than 17 keys in a keyring or if you have at least two keys that sit into separate slots (eg. a keyring and a non keyring). This can be reproduced by: ring=`keyctl newring bar @s` for ((i=1; i<=18; i++)); do last_key=`keyctl newring foo$i $ring`; done keyctl timeout $last_key 2 Doing this: echo 3 >/proc/sys/kernel/keys/gc_delay first will speed things up. If we do fall off of the top of the tree, we get the following oops: BUG: unable to handle kernel NULL pointer dereference at 0000000000000018 IP: [<ffffffff8136cea7>] assoc_array_gc+0x2f7/0x540 PGD dae15067 PUD cfc24067 PMD 0 Oops: 0000 [#1] SMP Modules linked in: xt_nat xt_mark nf_conntrack_netbios_ns nf_conntrack_broadcast ip6t_rpfilter ip6t_REJECT xt_conntrack ebtable_nat ebtable_broute bridge stp llc ebtable_filter ebtables ip6table_ni CPU: 0 PID: 26011 Comm: kworker/0:1 Not tainted 3.14.9-200.fc20.x86_64 #1 Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011 Workqueue: events key_garbage_collector task: ffff8800918bd580 ti: ffff8800aac14000 task.ti: ffff8800aac14000 RIP: 0010:[<ffffffff8136cea7>] [<ffffffff8136cea7>] assoc_array_gc+0x2f7/0x540 RSP: 0018:ffff8800aac15d40 EFLAGS: 00010206 RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff8800aaecacc0 RDX: ffff8800daecf440 RSI: 0000000000000001 RDI: ffff8800aadc2bc0 RBP: ffff8800aac15da8 R08: 0000000000000001 R09: 0000000000000003 R10: ffffffff8136ccc7 R11: 0000000000000000 R12: 0000000000000000 R13: 0000000000000000 R14: 0000000000000070 R15: 0000000000000001 FS: 0000000000000000(0000) GS:ffff88011fc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b CR2: 0000000000000018 CR3: 00000000db10d000 CR4: 00000000000006f0 Stack: ffff8800aac15d50 0000000000000011 ffff8800aac15db8 ffffffff812e2a70 ffff880091a00600 0000000000000000 ffff8800aadc2bc3 00000000cd42c987 ffff88003702df20 ffff88003702dfa0 0000000053b65c09 ffff8800aac15fd8 Call Trace: [<ffffffff812e2a70>] ? keyring_detect_cycle_iterator+0x30/0x30 [<ffffffff812e3e75>] keyring_gc+0x75/0x80 [<ffffffff812e1424>] key_garbage_collector+0x154/0x3c0 [<ffffffff810a67b6>] process_one_work+0x176/0x430 [<ffffffff810a744b>] worker_thread+0x11b/0x3a0 [<ffffffff810a7330>] ? rescuer_thread+0x3b0/0x3b0 [<ffffffff810ae1a8>] kthread+0xd8/0xf0 [<ffffffff810ae0d0>] ? insert_kthread_work+0x40/0x40 [<ffffffff816ffb7c>] ret_from_fork+0x7c/0xb0 [<ffffffff810ae0d0>] ? 
insert_kthread_work+0x40/0x40 Code: 08 4c 8b 22 0f 84 bf 00 00 00 41 83 c7 01 49 83 e4 fc 41 83 ff 0f 4c 89 65 c0 0f 8f 5a fe ff ff 48 8b 45 c0 4d 63 cf 49 83 c1 02 <4e> 8b 34 c8 4d 85 f6 0f 84 be 00 00 00 41 f6 c6 01 0f 84 92 RIP [<ffffffff8136cea7>] assoc_array_gc+0x2f7/0x540 RSP <ffff8800aac15d40> CR2: 0000000000000018 ---[ end trace 1129028a088c0cbd ]--- Signed-off-by: David Howells <dhowells@redhat.com> Acked-by: Don Zickus <dzickus@redhat.com> Signed-off-by: James Morris <james.l.morris@oracle.com>
95389b08d93d5c06ec63ab49bd732b0069b7c35e
linux
bigvul
1
null
null
null
KVM: x86: Improve thread safety in pit There's a race condition in the PIT emulation code in KVM. In __kvm_migrate_pit_timer the pit_timer object is accessed without synchronization. If the race condition occurs at the wrong time this can crash the host kernel. This fixes CVE-2014-3611. Cc: stable@vger.kernel.org Signed-off-by: Andrew Honig <ahonig@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2febc839133280d5a5e8e1179c94ea674489dae2
linux
bigvul
1
null
null
null
KVM: x86: Check non-canonical addresses upon WRMSR Upon WRMSR, the CPU should inject #GP if a non-canonical value (address) is written to certain MSRs. The behavior is "almost" identical for AMD and Intel (ignoring MSRs that are not implemented in either architecture since they would anyhow #GP). However, IA32_SYSENTER_ESP and IA32_SYSENTER_EIP cause #GP if a non-canonical address is written on Intel but not on AMD (which ignores the top 32-bits). Accordingly, this patch injects a #GP on the MSRs which behave identically on Intel and AMD. To eliminate the differences between the architectures, the value which is written to IA32_SYSENTER_ESP and IA32_SYSENTER_EIP is turned into a canonical value before writing instead of injecting a #GP. Some references from the Intel and AMD manuals: According to the Intel SDM description of the WRMSR instruction, #GP is expected on WRMSR "If the source register contains a non-canonical address and ECX specifies one of the following MSRs: IA32_DS_AREA, IA32_FS_BASE, IA32_GS_BASE, IA32_KERNEL_GS_BASE, IA32_LSTAR, IA32_SYSENTER_EIP, IA32_SYSENTER_ESP." According to the AMD instruction manual: LSTAR/CSTAR (SYSCALL): "The WRMSR instruction loads the target RIP into the LSTAR and CSTAR registers. If an RIP written by WRMSR is not in canonical form, a general-protection exception (#GP) occurs." IA32_GS_BASE and IA32_FS_BASE (WRFSBASE/WRGSBASE): "The address written to the base field must be in canonical form or a #GP fault will occur." IA32_KERNEL_GS_BASE (SWAPGS): "The address stored in the KernelGSbase MSR must be in canonical form." This patch fixes CVE-2014-3610. Cc: stable@vger.kernel.org Signed-off-by: Nadav Amit <namit@cs.technion.ac.il> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
854e8bb1aa06c578c2c9145fa6bfe3680ef63b23
linux
bigvul
1
null
null
null
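Relating to the WRMSR record above: with 48-bit virtual addresses (the assumption here; parts with 5-level paging use 57 bits), an address is canonical when bits 63..47 are all copies of bit 47. A small check written to avoid signed-shift pitfalls:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Canonical for 48-bit virtual addresses: bits 63..47 must be all zeros or
 * all ones, i.e. sign-extended from bit 47. */
static bool is_noncanonical(uint64_t addr)
{
    uint64_t top = addr >> 47;            /* bits 63..47, 17 bits */
    return top != 0 && top != 0x1ffff;
}

int main(void)
{
    printf("%d\n", is_noncanonical(0x00007fffffffffffULL)); /* 0: canonical */
    printf("%d\n", is_noncanonical(0xffff800000000000ULL)); /* 0: canonical */
    printf("%d\n", is_noncanonical(0x0000800000000000ULL)); /* 1: not */
    return 0;
}
```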
kvm: iommu: fix the third parameter of kvm_iommu_put_pages (CVE-2014-3601) The third parameter of kvm_iommu_put_pages is wrong, It should be 'gfn - slot->base_gfn'. By making gfn very large, malicious guest or userspace can cause kvm to go to this error path, and subsequently to pass a huge value as size. Alternatively if gfn is small, then pages would be pinned but never unpinned, causing host memory leak and local DOS. Passing a reasonable but large value could be the most dangerous case, because it would unpin a page that should have stayed pinned, and thus allow the device to DMA into arbitrary memory. However, this cannot happen because of the condition that can trigger the error: - out of memory (where you can't allocate even a single page) should not be possible for the attacker to trigger - when exceeding the iommu's address space, guest pages after gfn will also exceed the iommu's address space, and inside kvm_iommu_put_pages() the iommu_iova_to_phys() will fail. The page thus would not be unpinned at all. Reported-by: Jack Morgenstein <jackm@mellanox.com> Cc: stable@vger.kernel.org Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
350b8bdd689cd2ab2c67c8a86a0be86cfa0751a7
linux
bigvul
1
null
null
null
Fixed Sec Bug #67717 segfault in dns_get_record CVE-2014-3597 Incomplete fix for CVE-2014-4049 Check possible buffer overflow - pass real buffer end to dn_expand calls - check buffer len before each read
2fefae47716d501aec41c1102f3fd4531f070b05
php-src
bigvul
1
null
null
null
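The dns_get_record record above mentions passing the real buffer end to dn_expand() and checking the remaining length before each read. A hedged sketch of that calling pattern using the libresolv dn_expand() prototype (the 10-byte record-header check and the sample data are illustrative, not the PHP code; link with -lresolv on glibc):

```c
#include <arpa/nameser.h>
#include <netinet/in.h>
#include <resolv.h>
#include <stdio.h>

/* Expand a compressed DNS name defensively: pass the true end of the
 * received answer (answer + answer_len), never a fixed or assumed size,
 * and check the remaining length before reading further fields. */
static int parse_name(const unsigned char *answer, size_t answer_len,
                      const unsigned char *cp, char *name, size_t name_len)
{
    const unsigned char *end = answer + answer_len;   /* real buffer end */
    int n;

    if (cp >= end)
        return -1;                     /* nothing left to read */
    n = dn_expand(answer, end, cp, name, (int)name_len);
    if (n < 0)
        return -1;                     /* malformed or looping name */
    if ((size_t)(end - cp) < (size_t)n + 10)
        return -1;                     /* type(2)+class(2)+ttl(4)+rdlen(2)
                                          would overrun the answer */
    return n;
}

int main(void)
{
    /* A deliberately truncated, bogus answer: parse_name refuses it. */
    unsigned char answer[4] = { 0xc0, 0x0c, 0x00, 0x00 };
    char name[NS_MAXDNAME];

    printf("%d\n", parse_name(answer, sizeof(answer),
                              answer, name, sizeof(name)));
    return 0;
}
```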
Fix bug #67716 - Segfault in cdf.c
7ba1409a1aee5925180de546057ddd84ff267947
php-src
bigvul
1
null
null
null
* Enforce limit of 8K on regex searches that have no limits * Allow the l modifier for regex to mean line count. Default to byte count. If line count is specified, assume a max of 80 characters per line to limit the byte count. * Don't allow conversions to be used for dates, allowing the mask field to be used as an offset. * Bump the version of the magic format so that regex changes are visible.
4a284c89d6ef11aca34da65da7d673050a5ea320
file
bigvul
1
null
null
null