Searched hist:2258 (Results 26 - 28 of 28) sorted by relevance

/linux-master/net/unix/
af_unix.c    diff e1d09c2c Tue May 09 18:34:56 MDT 2023 Kuniyuki Iwashima <kuniyu@amazon.com> af_unix: Fix data races around sk->sk_shutdown.

KCSAN found a data race around sk->sk_shutdown: unix_release_sock()
and unix_shutdown() update it under unix_state_lock(), while unix_poll()
and unix_dgram_poll() read it locklessly.

We need to annotate the writes and reads with WRITE_ONCE() and READ_ONCE().
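
As a minimal sketch of that annotation pattern (abridged, not the verbatim
diff; the local variable and the poll-mask handling shown here are
illustrative):

	/*
	 * Writer side, e.g. unix_shutdown() / unix_release_sock(): the
	 * update is still serialized by unix_state_lock(), but the store
	 * is marked so lockless readers can pair with it.
	 */
	unix_state_lock(sk);
	WRITE_ONCE(sk->sk_shutdown, sk->sk_shutdown | mode);
	unix_state_unlock(sk);

	/* Reader side, e.g. unix_poll(): runs without unix_state_lock(). */
	unsigned char shutdown = READ_ONCE(sk->sk_shutdown);

	if (shutdown == SHUTDOWN_MASK)
		mask |= EPOLLHUP;
	if (shutdown & RCV_SHUTDOWN)
		mask |= EPOLLRDHUP | EPOLLIN | EPOLLRDNORM;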

BUG: KCSAN: data-race in unix_poll / unix_release_sock

write to 0xffff88800d0f8aec of 1 bytes by task 264 on cpu 0:
unix_release_sock+0x75c/0x910 net/unix/af_unix.c:631
unix_release+0x59/0x80 net/unix/af_unix.c:1042
__sock_release+0x7d/0x170 net/socket.c:653
sock_close+0x19/0x30 net/socket.c:1397
__fput+0x179/0x5e0 fs/file_table.c:321
____fput+0x15/0x20 fs/file_table.c:349
task_work_run+0x116/0x1a0 kernel/task_work.c:179
resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
exit_to_user_mode_loop kernel/entry/common.c:171 [inline]
exit_to_user_mode_prepare+0x174/0x180 kernel/entry/common.c:204
__syscall_exit_to_user_mode_work kernel/entry/common.c:286 [inline]
syscall_exit_to_user_mode+0x1a/0x30 kernel/entry/common.c:297
do_syscall_64+0x4b/0x90 arch/x86/entry/common.c:86
entry_SYSCALL_64_after_hwframe+0x72/0xdc

read to 0xffff88800d0f8aec of 1 bytes by task 222 on cpu 1:
unix_poll+0xa3/0x2a0 net/unix/af_unix.c:3170
sock_poll+0xcf/0x2b0 net/socket.c:1385
vfs_poll include/linux/poll.h:88 [inline]
ep_item_poll.isra.0+0x78/0xc0 fs/eventpoll.c:855
ep_send_events fs/eventpoll.c:1694 [inline]
ep_poll fs/eventpoll.c:1823 [inline]
do_epoll_wait+0x6c4/0xea0 fs/eventpoll.c:2258
__do_sys_epoll_wait fs/eventpoll.c:2270 [inline]
__se_sys_epoll_wait fs/eventpoll.c:2265 [inline]
__x64_sys_epoll_wait+0xcc/0x190 fs/eventpoll.c:2265
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3b/0x90 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x72/0xdc

value changed: 0x00 -> 0x03

Reported by Kernel Concurrency Sanitizer on:
CPU: 1 PID: 222 Comm: dbus-broker Not tainted 6.3.0-rc7-02330-gca6270c12e20 #2
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014

Fixes: 3c73419c09a5 ("af_unix: fix 'poll for write'/ connected DGRAM sockets")
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Reported-by: syzbot <syzkaller@googlegroups.com>
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Michal Kubiak <michal.kubiak@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
/linux-master/net/core/
rtnetlink.c    diff 0ff50e83 Tue May 23 05:20:28 MDT 2017 Alexander Potapenko <glider@google.com> net: rtnetlink: bail out from rtnl_fdb_dump() on parse error

rtnl_fdb_dump() failed to check the result of nlmsg_parse(), which left
the contents of |ifm| uninitialized whenever nlh->nlmsg_len was too
small to accommodate |ifm|. The uninitialized data may affect some
branches and result in unwanted effects, although kernel data doesn't
seem to leak to userspace directly.
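
As a minimal sketch of the bail-out (abridged, not the verbatim diff; the
-EINVAL return value and the six-argument nlmsg_parse() signature with a
NULL extack are assumptions):

	/*
	 * Inside rtnl_fdb_dump(): only trust the header/attributes once
	 * nlmsg_parse() has succeeded; on a malformed (too short) request,
	 * stop the dump instead of reading an ifinfomsg the message never
	 * contained.
	 */
	struct ifinfomsg *ifm = nlmsg_data(cb->nlh);
	struct nlattr *tb[IFLA_MAX + 1];
	int err;

	err = nlmsg_parse(cb->nlh, sizeof(struct ifinfomsg), tb, IFLA_MAX,
			  ifla_policy, NULL);
	if (err < 0)
		return -EINVAL;		/* bail out on parse error */

	/* ifm and tb[] may only be used past this point ... */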

The bug has been detected with KMSAN and syzkaller.

For the record, here is the KMSAN report:

==================================================================
BUG: KMSAN: use of unitialized memory in rtnl_fdb_dump+0x5dc/0x1000
CPU: 0 PID: 1039 Comm: probe Not tainted 4.11.0-rc5+ #2727
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:16
dump_stack+0x143/0x1b0 lib/dump_stack.c:52
kmsan_report+0x12a/0x180 mm/kmsan/kmsan.c:1007
__kmsan_warning_32+0x66/0xb0 mm/kmsan/kmsan_instr.c:491
rtnl_fdb_dump+0x5dc/0x1000 net/core/rtnetlink.c:3230
netlink_dump+0x84f/0x1190 net/netlink/af_netlink.c:2168
__netlink_dump_start+0xc97/0xe50 net/netlink/af_netlink.c:2258
netlink_dump_start ./include/linux/netlink.h:165
rtnetlink_rcv_msg+0xae9/0xb40 net/core/rtnetlink.c:4094
netlink_rcv_skb+0x339/0x5a0 net/netlink/af_netlink.c:2339
rtnetlink_rcv+0x83/0xa0 net/core/rtnetlink.c:4110
netlink_unicast_kernel net/netlink/af_netlink.c:1272
netlink_unicast+0x13b7/0x1480 net/netlink/af_netlink.c:1298
netlink_sendmsg+0x10b8/0x10f0 net/netlink/af_netlink.c:1844
sock_sendmsg_nosec net/socket.c:633
sock_sendmsg net/socket.c:643
___sys_sendmsg+0xd4b/0x10f0 net/socket.c:1997
__sys_sendmsg net/socket.c:2031
SYSC_sendmsg+0x2c6/0x3f0 net/socket.c:2042
SyS_sendmsg+0x87/0xb0 net/socket.c:2038
do_syscall_64+0x102/0x150 arch/x86/entry/common.c:285
entry_SYSCALL64_slow_path+0x25/0x25 arch/x86/entry/entry_64.S:246
RIP: 0033:0x401300
RSP: 002b:00007ffc3b0e6d58 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 00000000004002b0 RCX: 0000000000401300
RDX: 0000000000000000 RSI: 00007ffc3b0e6d80 RDI: 0000000000000003
RBP: 00007ffc3b0e6e00 R08: 000000000000000b R09: 0000000000000004
R10: 000000000000000d R11: 0000000000000246 R12: 0000000000000000
R13: 00000000004065a0 R14: 0000000000406630 R15: 0000000000000000
origin: 000000008fe00056
save_stack_trace+0x59/0x60 arch/x86/kernel/stacktrace.c:59
kmsan_save_stack_with_flags mm/kmsan/kmsan.c:352
kmsan_internal_poison_shadow+0xb1/0x1a0 mm/kmsan/kmsan.c:247
kmsan_poison_shadow+0x6d/0xc0 mm/kmsan/kmsan.c:260
slab_alloc_node mm/slub.c:2743
__kmalloc_node_track_caller+0x1f4/0x390 mm/slub.c:4349
__kmalloc_reserve net/core/skbuff.c:138
__alloc_skb+0x2cd/0x740 net/core/skbuff.c:231
alloc_skb ./include/linux/skbuff.h:933
netlink_alloc_large_skb net/netlink/af_netlink.c:1144
netlink_sendmsg+0x934/0x10f0 net/netlink/af_netlink.c:1819
sock_sendmsg_nosec net/socket.c:633
sock_sendmsg net/socket.c:643
___sys_sendmsg+0xd4b/0x10f0 net/socket.c:1997
__sys_sendmsg net/socket.c:2031
SYSC_sendmsg+0x2c6/0x3f0 net/socket.c:2042
SyS_sendmsg+0x87/0xb0 net/socket.c:2038
do_syscall_64+0x102/0x150 arch/x86/entry/common.c:285
return_from_SYSCALL_64+0x0/0x6a arch/x86/entry/entry_64.S:246
==================================================================

and the reproducer:

==================================================================
#include <sys/socket.h>
#include <sys/uio.h>	// struct iovec
#include <net/if_arp.h>
#include <linux/netlink.h>
#include <stdint.h>
#include <string.h>	// memset()

int main()
{
	// protocol 0 == NETLINK_ROUTE
	int sock = socket(PF_NETLINK, SOCK_DGRAM | SOCK_NONBLOCK, 0);
	struct msghdr msg;
	memset(&msg, 0, sizeof(msg));
	char nlmsg_buf[32];
	memset(nlmsg_buf, 0, sizeof(nlmsg_buf));
	struct nlmsghdr *nlmsg = (struct nlmsghdr *)nlmsg_buf;
	// 0x11 = 17 bytes: too short for NLMSG_HDRLEN + sizeof(struct ifinfomsg)
	nlmsg->nlmsg_len = 0x11;
	nlmsg->nlmsg_type = 0x1e; // RTM_GETNEIGH = RTM_BASE + 0x0e
	// type = 0x0e = 1110b
	// kind = 2
	nlmsg->nlmsg_flags = 0x101; // NLM_F_ROOT | NLM_F_REQUEST
	nlmsg->nlmsg_seq = 0;
	nlmsg->nlmsg_pid = 0;
	nlmsg_buf[16] = (char)7; // first payload byte: family 7 == AF_BRIDGE
	struct iovec iov;
	iov.iov_base = nlmsg_buf;
	iov.iov_len = 17;
	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	sendmsg(sock, &msg, 0);
	return 0;
}
==================================================================

Signed-off-by: Alexander Potapenko <glider@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
/linux-master/fs/btrfs/
disk-io.c    diff a9d2d4ad Sat Feb 08 00:33:08 MST 2014 Liu Bo <bo.li.liu@oracle.com> Btrfs: fix a lockdep warning when cleaning up aborted transaction

Now that we have two spinlocks for the management of delayed refs,
CONFIG_DEBUG_SPINLOCK=y helped me find this:

[ 4723.413809] BUG: spinlock wrong CPU on CPU#1, btrfs-transacti/2258
[ 4723.414882] lock: 0xffff880048377670, .magic: dead4ead, .owner: btrfs-transacti/2258, .owner_cpu: 2
[ 4723.417146] CPU: 1 PID: 2258 Comm: btrfs-transacti Tainted: G W O 3.12.0+ #4
[ 4723.421321] Call Trace:
[ 4723.421872] [<ffffffff81680fe7>] dump_stack+0x54/0x74
[ 4723.422753] [<ffffffff81681093>] spin_dump+0x8c/0x91
[ 4723.424979] [<ffffffff816810b9>] spin_bug+0x21/0x26
[ 4723.425846] [<ffffffff81323956>] do_raw_spin_unlock+0x66/0x90
[ 4723.434424] [<ffffffff81689bf7>] _raw_spin_unlock+0x27/0x40
[ 4723.438747] [<ffffffffa015da9e>] btrfs_cleanup_one_transaction+0x35e/0x710 [btrfs]
[ 4723.443321] [<ffffffffa015df54>] btrfs_cleanup_transaction+0x104/0x570 [btrfs]
[ 4723.444692] [<ffffffff810c1b5d>] ? trace_hardirqs_on_caller+0xfd/0x1c0
[ 4723.450336] [<ffffffff810c1c2d>] ? trace_hardirqs_on+0xd/0x10
[ 4723.451332] [<ffffffffa015e5ee>] transaction_kthread+0x22e/0x270 [btrfs]
[ 4723.452543] [<ffffffffa015e3c0>] ? btrfs_cleanup_transaction+0x570/0x570 [btrfs]
[ 4723.457833] [<ffffffff81079efa>] kthread+0xea/0xf0
[ 4723.458990] [<ffffffff81079e10>] ? kthread_create_on_node+0x140/0x140
[ 4723.460133] [<ffffffff81692aac>] ret_from_fork+0x7c/0xb0
[ 4723.460865] [<ffffffff81079e10>] ? kthread_create_on_node+0x140/0x140
[ 4723.496521] ------------[ cut here ]------------

----------------------------------------------------------------------

The reason is that we end up calling cond_resched_lock(&head_ref->lock)
while still holding @delayed_refs->lock.

This differs from __btrfs_run_delayed_refs(), which does the drop-acquire
dance before and after actually processing the delayed refs.

Here we never drop @delayed_refs->lock, so nobody else can add new delayed
refs to head_ref, and cond_resched_lock(&head_ref->lock) is not necessary.
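
As a minimal illustration of the locking rule at play, with hypothetical
lock and helper names (outer, inner, more_work, handle_one_item) rather than
the btrfs ones: cond_resched_lock() may drop the lock it is given and call
schedule(), which is only safe when no other spinlock is held.

	spin_lock(&outer);	/* e.g. delayed_refs->lock, never dropped here */
	spin_lock(&inner);	/* e.g. head_ref->lock */
	while (more_work()) {
		handle_one_item();
		/*
		 * WRONG while "outer" is held: cond_resched_lock(&inner)
		 * may unlock "inner" and schedule with "outer" still locked,
		 * which is consistent with the CONFIG_DEBUG_SPINLOCK splat
		 * above; the fix simply removes this reschedule point.
		 */
	}
	spin_unlock(&inner);
	spin_unlock(&outer);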

Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Signed-off-by: Chris Mason <clm@fb.com>

Completed in 776 milliseconds
