{
"id": "openEuler-SA-2024-1298",
"url": "https://www.openeuler.org/zh/security/security-bulletins/detail/?id=openEuler-SA-2024-1298",
"title": "An update for kernel is now available for openEuler-22.03-LTS",
"severity": "High",
"description": "The Linux Kernel, the operating system core itself.\r\n\r\nSecurity Fix(es):\r\n\r\nIn the Linux kernel, the following vulnerability has been resolved:\r\n\r\nbtrfs: fix deadlock when cloning inline extents and using qgroups\r\n\r\nThere are a few exceptional cases where cloning an inline extent needs to\ncopy the inline extent data into a page of the destination inode.\r\n\r\nWhen this happens, we end up starting a transaction while having a dirty\npage for the destination inode and while having the range locked in the\ndestination's inode iotree too. Because when reserving metadata space\nfor a transaction we may need to flush existing delalloc in case there is\nnot enough free space, we have a mechanism in place to prevent a deadlock,\nwhich was introduced in commit 3d45f221ce627d (\"btrfs: fix deadlock when\ncloning inline extent and low on free metadata space\").\r\n\r\nHowever when using qgroups, a transaction also reserves metadata qgroup\nspace, which can also result in flushing delalloc in case there is not\nenough available space at the moment. When this happens we deadlock, since\nflushing delalloc requires locking the file range in the inode's iotree\nand the range was already locked at the very beginning of the clone\noperation, before attempting to start the transaction.\r\n\r\nWhen this issue happens, stack traces like the following are reported:\r\n\r\n [72747.556262] task:kworker/u81:9 state:D stack: 0 pid: 225 ppid: 2 flags:0x00004000\n [72747.556268] Workqueue: writeback wb_workfn (flush-btrfs-1142)\n [72747.556271] Call Trace:\n [72747.556273] __schedule+0x296/0x760\n [72747.556277] schedule+0x3c/0xa0\n [72747.556279] io_schedule+0x12/0x40\n [72747.556284] __lock_page+0x13c/0x280\n [72747.556287] ? generic_file_readonly_mmap+0x70/0x70\n [72747.556325] extent_write_cache_pages+0x22a/0x440 [btrfs]\n [72747.556331] ? __set_page_dirty_nobuffers+0xe7/0x160\n [72747.556358] ? set_extent_buffer_dirty+0x5e/0x80 [btrfs]\n [72747.556362] ? 
update_group_capacity+0x25/0x210\n [72747.556366] ? cpumask_next_and+0x1a/0x20\n [72747.556391] extent_writepages+0x44/0xa0 [btrfs]\n [72747.556394] do_writepages+0x41/0xd0\n [72747.556398] __writeback_single_inode+0x39/0x2a0\n [72747.556403] writeback_sb_inodes+0x1ea/0x440\n [72747.556407] __writeback_inodes_wb+0x5f/0xc0\n [72747.556410] wb_writeback+0x235/0x2b0\n [72747.556414] ? get_nr_inodes+0x35/0x50\n [72747.556417] wb_workfn+0x354/0x490\n [72747.556420] ? newidle_balance+0x2c5/0x3e0\n [72747.556424] process_one_work+0x1aa/0x340\n [72747.556426] worker_thread+0x30/0x390\n [72747.556429] ? create_worker+0x1a0/0x1a0\n [72747.556432] kthread+0x116/0x130\n [72747.556435] ? kthread_park+0x80/0x80\n [72747.556438] ret_from_fork+0x1f/0x30\r\n\r\n [72747.566958] Workqueue: btrfs-flush_delalloc btrfs_work_helper [btrfs]\n [72747.566961] Call Trace:\n [72747.566964] __schedule+0x296/0x760\n [72747.566968] ? finish_wait+0x80/0x80\n [72747.566970] schedule+0x3c/0xa0\n [72747.566995] wait_extent_bit.constprop.68+0x13b/0x1c0 [btrfs]\n [72747.566999] ? finish_wait+0x80/0x80\n [72747.567024] lock_extent_bits+0x37/0x90 [btrfs]\n [72747.567047] btrfs_invalidatepage+0x299/0x2c0 [btrfs]\n [72747.567051] ? find_get_pages_range_tag+0x2cd/0x380\n [72747.567076] __extent_writepage+0x203/0x320 [btrfs]\n [72747.567102] extent_write_cache_pages+0x2bb/0x440 [btrfs]\n [72747.567106] ? update_load_avg+0x7e/0x5f0\n [72747.567109] ? enqueue_entity+0xf4/0x6f0\n [72747.567134] extent_writepages+0x44/0xa0 [btrfs]\n [72747.567137] ? enqueue_task_fair+0x93/0x6f0\n [72747.567140] do_writepages+0x41/0xd0\n [72747.567144] __filemap_fdatawrite_range+0xc7/0x100\n [72747.567167] btrfs_run_delalloc_work+0x17/0x40 [btrfs]\n [72747.567195] btrfs_work_helper+0xc2/0x300 [btrfs]\n [72747.567200] process_one_work+0x1aa/0x340\n [72747.567202] worker_thread+0x30/0x390\n [72747.567205] ? create_worker+0x1a0/0x1a0\n [72747.567208] kthread+0x116/0x130\n [72747.567211] ? 
kthread_park+0x80/0x80\n [72747.567214] ret_from_fork+0x1f/0x30\r\n\r\n [72747.569686] task:fsstress state:D stack: \n---truncated---(CVE-2021-46987)\r\n\r\nIn the Linux kernel, the following vulnerability has been resolved:\r\n\r\nbpf: Defer the free of inner map when necessary\r\n\r\nWhen updating or deleting an inner map in map array or map htab, the map\nmay still be accessed by non-sleepable program or sleepable program.\nHowever bpf_map_fd_put_ptr() decreases the ref-counter of the inner map\ndirectly through bpf_map_put(), if the ref-counter is the last one\n(which is true for most cases), the inner map will be freed by\nops->map_free() in a kworker. But for now, most .map_free() callbacks\ndon't use synchronize_rcu() or its variants to wait for the elapse of a\nRCU grace period, so after the invocation of ops->map_free completes,\nthe bpf program which is accessing the inner map may incur\nuse-after-free problem.\r\n\r\nFix the free of inner map by invoking bpf_map_free_deferred() after both\none RCU grace period and one tasks trace RCU grace period if the inner\nmap has been removed from the outer map before. The deferment is\naccomplished by using call_rcu() or call_rcu_tasks_trace() when\nreleasing the last ref-counter of bpf map. The newly-added rcu_head\nfield in bpf_map shares the same storage space with work field to\nreduce the size of bpf_map.(CVE-2023-52447)\r\n\r\nIn the Linux kernel, the following vulnerability has been resolved:\r\n\r\ngfs2: Fix kernel NULL pointer dereference in gfs2_rgrp_dump\r\n\r\nSyzkaller has reported a NULL pointer dereference when accessing\nrgd->rd_rgl in gfs2_rgrp_dump(). This can happen when creating\nrgd->rd_gl fails in read_rindex_entry(). 
Add a NULL pointer check in\ngfs2_rgrp_dump() to prevent that.(CVE-2023-52448)\r\n\r\nIn the Linux kernel, the following vulnerability has been resolved:\r\n\r\nmtd: Fix gluebi NULL pointer dereference caused by ftl notifier\r\n\r\nIf both ftl.ko and gluebi.ko are loaded, the notifier of ftl\ntriggers NULL pointer dereference when trying to access\n‘gluebi->desc’ in gluebi_read().\r\n\r\nubi_gluebi_init\n ubi_register_volume_notifier\n ubi_enumerate_volumes\n ubi_notify_all\n gluebi_notify nb->notifier_call()\n gluebi_create\n mtd_device_register\n mtd_device_parse_register\n add_mtd_device\n blktrans_notify_add not->add()\n ftl_add_mtd tr->add_mtd()\n scan_header\n mtd_read\n mtd_read_oob\n mtd_read_oob_std\n gluebi_read mtd->read()\n gluebi->desc - NULL\r\n\r\nDetailed reproduction information available at the Link [1],\r\n\r\nIn the normal case, obtain gluebi->desc in the gluebi_get_device(),\nand access gluebi->desc in the gluebi_read(). However,\ngluebi_get_device() is not executed in advance in the\nftl_add_mtd() process, which leads to NULL pointer dereference.\r\n\r\nThe solution for the gluebi module is to run jffs2 on the UBI\nvolume without considering working with ftl or mtdblock [2].\nTherefore, this problem can be avoided by preventing gluebi from\ncreating the mtdblock device after creating mtd partition of the\ntype MTD_UBIVOLUME.(CVE-2023-52449)\r\n\r\nIn the Linux kernel, the following vulnerability has been resolved:\r\n\r\nbpf: Fix accesses to uninit stack slots\r\n\r\nPrivileged programs are supposed to be able to read uninitialized stack\nmemory (ever since 6715df8d5) but, before this patch, these accesses\nwere permitted inconsistently. In particular, accesses were permitted\nabove state->allocated_stack, but not below it. In other words, if the\nstack was already \"large enough\", the access was permitted, but\notherwise the access was rejected instead of being allowed to \"grow the\nstack\". 
This undesired rejection was happening in two places:\n- in check_stack_slot_within_bounds()\n- in check_stack_range_initialized()\nThis patch arranges for these accesses to be permitted. A bunch of tests\nthat were relying on the old rejection had to change; all of them were\nchanged to add also run unprivileged, in which case the old behavior\npersists. One tests couldn't be updated - global_func16 - because it\ncan't run unprivileged for other reasons.\r\n\r\nThis patch also fixes the tracking of the stack size for variable-offset\nreads. This second fix is bundled in the same commit as the first one\nbecause they're inter-related. Before this patch, writes to the stack\nusing registers containing a variable offset (as opposed to registers\nwith fixed, known values) were not properly contributing to the\nfunction's needed stack size. As a result, it was possible for a program\nto verify, but then to attempt to read out-of-bounds data at runtime\nbecause a too small stack had been allocated for it.\r\n\r\nEach function tracks the size of the stack it needs in\nbpf_subprog_info.stack_depth, which is maintained by\nupdate_stack_depth(). For regular memory accesses, check_mem_access()\nwas calling update_state_depth() but it was passing in only the fixed\npart of the offset register, ignoring the variable offset. This was\nincorrect; the minimum possible value of that register should be used\ninstead.\r\n\r\nThis tracking is now fixed by centralizing the tracking of stack size in\ngrow_stack_state(), and by lifting the calls to grow_stack_state() to\ncheck_stack_access_within_bounds() as suggested by Andrii. The code is\nnow simpler and more convincingly tracks the correct maximum stack size.\ncheck_stack_range_initialized() can now rely on enough stack having been\nallocated for the access; this helps with the fix for the first issue.\r\n\r\nA few tests were changed to also check the stack depth computation. 
The\none that fails without this patch is verifier_var_off:stack_write_priv_vs_unpriv.(CVE-2023-52452)",
"cves": [
{
"id": "CVE-2023-52452",
"url": "https://nvd.nist.gov/vuln/detail/CVE-2023-52452",
"severity": "Medium"
}
]
}