This is the start of the stable review cycle for the 4.19.223 release. There are 38 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 29 Dec 2021 15:13:09 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at:
	https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.19.223-rc...
or in the git tree and branch at:
	git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.19.y
and the diffstat can be found below.
thanks,
greg k-h
------------- Pseudo-Shortlog of commits:
Greg Kroah-Hartman gregkh@linuxfoundation.org Linux 4.19.223-rc1
Rémi Denis-Courmont remi@remlab.net phonet/pep: refuse to enable an unbound pipe
Lin Ma linma@zju.edu.cn hamradio: improve the incomplete fix to avoid NPD
Lin Ma linma@zju.edu.cn hamradio: defer ax25 kfree after unregister_netdev
Lin Ma linma@zju.edu.cn ax25: NPD bug when detaching AX25 device
Guenter Roeck linux@roeck-us.net hwmon: (lm90) Do not report 'busy' status bit as alarm
Samuel Čavoj samuel@cavoj.net Input: i8042 - enable deferred probe quirk for ASUS UM325UA
Sean Christopherson seanjc@google.com KVM: VMX: Fix stale docs for kvm-intel.emulate_invalid_guest_state
Marian Postevca posteuca@mutex.one usb: gadget: u_ether: fix race in setting MAC address in setup phase
Chao Yu chao@kernel.org f2fs: fix to do sanity check on last xattr entry in __f2fs_setxattr()
Ard Biesheuvel ardb@kernel.org ARM: 9169/1: entry: fix Thumb2 bug in iWMMXt exception handling
Fabien Dessenne fabien.dessenne@foss.st.com pinctrl: stm32: consider the GPIO offset to expose all the GPIO lines
Andrew Cooper andrew.cooper3@citrix.com x86/pkey: Fix undefined behaviour with PKRU_WD_BIT
John David Anglin dave.anglin@bell.net parisc: Correct completer in lws start
Thadeu Lima de Souza Cascardo cascardo@canonical.com ipmi: fix initialization when workqueue allocation fails
Thadeu Lima de Souza Cascardo cascardo@canonical.com ipmi: bail out if init_srcu_struct fails
José Expósito jose.exposito89@gmail.com Input: atmel_mxt_ts - fix double free in mxt_read_info_block
Colin Ian King colin.i.king@gmail.com ALSA: drivers: opl3: Fix incorrect use of vp->state
Xiaoke Wang xkernel.wang@foxmail.com ALSA: jack: Check the return value of kstrdup()
Guenter Roeck linux@roeck-us.net hwmon: (lm90) Fix usage of CONFIG2 register in detect function
Jiasheng Jiang jiasheng@iscas.ac.cn sfc: falcon: Check null pointer of rx_queue->page_ring
Jiasheng Jiang jiasheng@iscas.ac.cn drivers: net: smc911x: Check for error irq
Jiasheng Jiang jiasheng@iscas.ac.cn fjes: Check for error irq
Fernando Fernandez Mancera ffmancera@riseup.net bonding: fix ad_actor_system option setting to default
Wu Bo wubo40@huawei.com ipmi: Fix UAF when uninstall ipmi_si and ipmi_msghandler module
Willem de Bruijn willemb@google.com net: skip virtio_net_hdr_set_proto if protocol already set
Willem de Bruijn willemb@google.com net: accept UFOv6 packages in virtio_net_hdr_to_skb
Jiasheng Jiang jiasheng@iscas.ac.cn qlcnic: potential dereference null pointer of rx_queue->page_ring
Ignacy Gawędzki ignacy.gawedzki@green-communications.fr netfilter: fix regression in looped (broad|multi)cast's MAC handling
José Expósito jose.exposito89@gmail.com IB/qib: Fix memory leak in qib_user_sdma_queue_pkts()
Dongliang Mu mudongliangabcd@gmail.com spi: change clk_disable_unprepare to clk_unprepare
Robert Marko robert.marko@sartura.hr arm64: dts: allwinner: orangepi-zero-plus: fix PHY mode
Benjamin Tissoires benjamin.tissoires@redhat.com HID: holtek: fix mouse probing
Paolo Valente paolo.valente@linaro.org block, bfq: fix use after free in bfq_bfqq_expire
Paolo Valente paolo.valente@linaro.org block, bfq: fix queue removal from weights tree
Paolo Valente paolo.valente@linaro.org block, bfq: fix decrement of num_active_groups
Federico Motta federico@willer.it block, bfq: fix asymmetric scenarios detection
Federico Motta federico@willer.it block, bfq: improve asymmetric scenarios detection
Greg Jesionowski jesionowskigreg@gmail.com net: usb: lan78xx: add Allied Telesis AT29M2-AF
-------------
Diffstat:
 Documentation/admin-guide/kernel-parameters.txt    |   8 +-
 Documentation/networking/bonding.txt               |  11 +-
 Makefile                                           |   4 +-
 arch/arm/kernel/entry-armv.S                       |   8 +-
 .../dts/allwinner/sun50i-h5-orangepi-zero-plus.dts |   2 +-
 arch/parisc/kernel/syscall.S                       |   2 +-
 arch/x86/include/asm/pgtable.h                     |   4 +-
 block/bfq-iosched.c                                | 287 +++++++++++++--------
 block/bfq-iosched.h                                |  76 ++++--
 block/bfq-wf2q.c                                   |  56 ++--
 drivers/char/ipmi/ipmi_msghandler.c                |  21 +-
 drivers/hid/hid-holtek-mouse.c                     |  15 ++
 drivers/hwmon/lm90.c                               |   8 +-
 drivers/infiniband/hw/qib/qib_user_sdma.c          |   2 +-
 drivers/input/serio/i8042-x86ia64io.h              |   7 +
 drivers/input/touchscreen/atmel_mxt_ts.c           |   2 +-
 drivers/net/bonding/bond_options.c                 |   2 +-
 drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov.h  |   2 +-
 .../ethernet/qlogic/qlcnic/qlcnic_sriov_common.c   |  12 +-
 .../net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c   |   4 +-
 drivers/net/ethernet/sfc/falcon/rx.c               |   5 +-
 drivers/net/ethernet/smsc/smc911x.c                |   5 +
 drivers/net/fjes/fjes_main.c                       |   5 +
 drivers/net/hamradio/mkiss.c                       |   5 +-
 drivers/net/usb/lan78xx.c                          |   6 +
 drivers/pinctrl/stm32/pinctrl-stm32.c              |   8 +-
 drivers/spi/spi-armada-3700.c                      |   2 +-
 drivers/usb/gadget/function/u_ether.c              |  15 +-
 fs/f2fs/xattr.c                                    |   9 +-
 include/linux/virtio_net.h                         |  25 +-
 net/ax25/af_ax25.c                                 |   4 +-
 net/netfilter/nfnetlink_log.c                      |   3 +-
 net/netfilter/nfnetlink_queue.c                    |   3 +-
 net/phonet/pep.c                                   |   2 +
 sound/core/jack.c                                  |   4 +
 sound/drivers/opl3/opl3_midi.c                     |   2 +-
 36 files changed, 424 insertions(+), 212 deletions(-)
From: Greg Jesionowski jesionowskigreg@gmail.com
commit ef8a0f6eab1ca5d1a75c242c5c7b9d386735fa0a upstream.
This adds the vendor and product IDs for the AT29M2-AF which is a lan7801-based device.
Signed-off-by: Greg Jesionowski jesionowskigreg@gmail.com
Link: https://lore.kernel.org/r/20211214221027.305784-1-jesionowskigreg@gmail.com
Signed-off-by: Jakub Kicinski kuba@kernel.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/net/usb/lan78xx.c | 6 ++++++
 1 file changed, 6 insertions(+)
--- a/drivers/net/usb/lan78xx.c
+++ b/drivers/net/usb/lan78xx.c
@@ -75,6 +75,8 @@
 #define LAN7801_USB_PRODUCT_ID		(0x7801)
 #define LAN78XX_EEPROM_MAGIC		(0x78A5)
 #define LAN78XX_OTP_MAGIC		(0x78F3)
+#define AT29M2AF_USB_VENDOR_ID		(0x07C9)
+#define AT29M2AF_USB_PRODUCT_ID	(0x0012)
 
 #define MII_READ			1
 #define MII_WRITE			0
@@ -4170,6 +4172,10 @@ static const struct usb_device_id produc
 	/* LAN7801 USB Gigabit Ethernet Device */
 	USB_DEVICE(LAN78XX_USB_VENDOR_ID, LAN7801_USB_PRODUCT_ID),
 	},
+	{
+	/* ATM2-AF USB Gigabit Ethernet Device */
+	USB_DEVICE(AT29M2AF_USB_VENDOR_ID, AT29M2AF_USB_PRODUCT_ID),
+	},
 	{},
 };
 MODULE_DEVICE_TABLE(usb, products);
From: Federico Motta federico@willer.it
commit 2d29c9f89fcd9bf408fcdaaf515c90a169f22ecd upstream.
bfq defines as asymmetric a scenario where an active entity, say E (representing either a single bfq_queue or a group of other entities), has a higher weight than some other entities. If the entity E does sync I/O in such a scenario, then bfq plugs the dispatch of the I/O of the other entities in the following situation: E is in service but temporarily has no pending I/O request. In fact, without this plugging, all the times that E stops being temporarily idle, it may find the internal queues of the storage device already filled with an out-of-control number of extra requests, from other entities. So E may have to wait for the service of these extra requests, before finally having its own requests served. This may easily break service guarantees, with E getting less than its fair share of the device throughput. Usually, the end result is that E gets the same fraction of the throughput as the other entities, instead of getting more, according to its higher weight.
Yet there are two other more subtle cases where E, even if its weight is actually equal to or even lower than the weight of any other active entities, may get less than its fair share of the throughput in case the above I/O plugging is not performed:
1. other entities issue larger requests than E;
2. other entities contain more active child entities than E (or in general tend to have more backlog than E).
In the first case, other entities may get more service than E because they get larger requests, than those of E, served during the temporary idle periods of E. In the second case, other entities get more service because, by having many child entities, they have many requests ready for dispatching while E is temporarily idle.
This commit addresses this issue by extending the definition of asymmetric scenario: a scenario is asymmetric when
- active entities representing bfq_queues have differentiated weights, as in the original definition
or (inclusive)
- one or more entities representing groups of entities are active.
This broader definition makes sure that I/O plugging will be performed in all the above cases, provided that there is at least one active group. Of course, this definition is very coarse, so it will trigger I/O plugging also in cases where it is not needed, such as, e.g., multiple active entities with just one child each, and all with the same I/O-request size. The reason for this coarse definition is just that a finer-grained definition would be rather heavy to compute.
On the opposite end, even this new definition does not trigger I/O plugging in all cases where there is no active group, and all bfq_queues have the same weight. So, in these cases some unfairness may occur if there are asymmetries in I/O-request sizes. We made this choice because I/O plugging may lower throughput, and probably a user that has not created any group cares more about throughput than about perfect fairness. At any rate, as for possible applications that may care about service guarantees, bfq already guarantees a high responsiveness and a low latency to soft real-time applications automatically.
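For reference, the broadened test boils down to something like the following stand-alone sketch (toy types and values, not the kernel's bfq_data; the real check is the bfq_varied_queue_weights_or_active_groups() helper in the diff below):

#include <stdbool.h>
#include <stdio.h>

struct toy_bfqd {
	int distinct_queue_weights;	/* nodes in queue_weights_tree */
	int num_active_groups;		/* groups with active entities */
};

/* symmetric only if queue weights do not differ and no group is active */
static bool toy_symmetric_scenario(const struct toy_bfqd *bfqd)
{
	bool varied_queue_weights = bfqd->distinct_queue_weights > 1;
	bool active_groups = bfqd->num_active_groups > 0;

	return !(varied_queue_weights || active_groups);
}

int main(void)
{
	struct toy_bfqd no_groups = { .distinct_queue_weights = 1, .num_active_groups = 0 };
	struct toy_bfqd with_groups = { .distinct_queue_weights = 1, .num_active_groups = 2 };

	printf("no groups:   symmetric=%d\n", toy_symmetric_scenario(&no_groups));   /* 1 */
	printf("with groups: symmetric=%d\n", toy_symmetric_scenario(&with_groups)); /* 0 */
	return 0;
}

With at least one active group the scenario is deemed asymmetric regardless of the weights, which is exactly the coarse behaviour described above.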
Signed-off-by: Federico Motta federico@willer.it
Signed-off-by: Paolo Valente paolo.valente@linaro.org
Signed-off-by: Jens Axboe axboe@kernel.dk
Signed-off-by: Yu Kuai yukuai3@huawei.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 block/bfq-iosched.c | 223 ++++++++++++++++++++++++++++------------------
 block/bfq-iosched.h |  27 ++----
 block/bfq-wf2q.c    |  36 ++++----
 3 files changed, 155 insertions(+), 131 deletions(-)
--- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -625,12 +625,13 @@ void bfq_pos_tree_add_move(struct bfq_da }
/* - * Tell whether there are active queues or groups with differentiated weights. + * Tell whether there are active queues with different weights or + * active groups. */ -static bool bfq_differentiated_weights(struct bfq_data *bfqd) +static bool bfq_varied_queue_weights_or_active_groups(struct bfq_data *bfqd) { /* - * For weights to differ, at least one of the trees must contain + * For queue weights to differ, queue_weights_tree must contain * at least two nodes. */ return (!RB_EMPTY_ROOT(&bfqd->queue_weights_tree) && @@ -638,9 +639,7 @@ static bool bfq_differentiated_weights(s bfqd->queue_weights_tree.rb_node->rb_right) #ifdef CONFIG_BFQ_GROUP_IOSCHED ) || - (!RB_EMPTY_ROOT(&bfqd->group_weights_tree) && - (bfqd->group_weights_tree.rb_node->rb_left || - bfqd->group_weights_tree.rb_node->rb_right) + (bfqd->num_active_groups > 0 #endif ); } @@ -658,26 +657,25 @@ static bool bfq_differentiated_weights(s * 3) all active groups at the same level in the groups tree have the same * number of children. * - * Unfortunately, keeping the necessary state for evaluating exactly the - * above symmetry conditions would be quite complex and time-consuming. - * Therefore this function evaluates, instead, the following stronger - * sub-conditions, for which it is much easier to maintain the needed - * state: + * Unfortunately, keeping the necessary state for evaluating exactly + * the last two symmetry sub-conditions above would be quite complex + * and time consuming. Therefore this function evaluates, instead, + * only the following stronger two sub-conditions, for which it is + * much easier to maintain the needed state: * 1) all active queues have the same weight, - * 2) all active groups have the same weight, - * 3) all active groups have at most one active child each. - * In particular, the last two conditions are always true if hierarchical - * support and the cgroups interface are not enabled, thus no state needs - * to be maintained in this case. + * 2) there are no active groups. + * In particular, the last condition is always true if hierarchical + * support or the cgroups interface are not enabled, thus no state + * needs to be maintained in this case. */ static bool bfq_symmetric_scenario(struct bfq_data *bfqd) { - return !bfq_differentiated_weights(bfqd); + return !bfq_varied_queue_weights_or_active_groups(bfqd); }
/* * If the weight-counter tree passed as input contains no counter for - * the weight of the input entity, then add that counter; otherwise just + * the weight of the input queue, then add that counter; otherwise just * increment the existing counter. * * Note that weight-counter trees contain few nodes in mostly symmetric @@ -688,25 +686,25 @@ static bool bfq_symmetric_scenario(struc * In most scenarios, the rate at which nodes are created/destroyed * should be low too. */ -void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_entity *entity, +void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq, struct rb_root *root) { + struct bfq_entity *entity = &bfqq->entity; struct rb_node **new = &(root->rb_node), *parent = NULL;
/* - * Do not insert if the entity is already associated with a + * Do not insert if the queue is already associated with a * counter, which happens if: - * 1) the entity is associated with a queue, - * 2) a request arrival has caused the queue to become both + * 1) a request arrival has caused the queue to become both * non-weight-raised, and hence change its weight, and * backlogged; in this respect, each of the two events * causes an invocation of this function, - * 3) this is the invocation of this function caused by the + * 2) this is the invocation of this function caused by the * second event. This second invocation is actually useless, * and we handle this fact by exiting immediately. More * efficient or clearer solutions might possibly be adopted. */ - if (entity->weight_counter) + if (bfqq->weight_counter) return;
while (*new) { @@ -716,7 +714,7 @@ void bfq_weights_tree_add(struct bfq_dat parent = *new;
if (entity->weight == __counter->weight) { - entity->weight_counter = __counter; + bfqq->weight_counter = __counter; goto inc_counter; } if (entity->weight < __counter->weight) @@ -725,66 +723,67 @@ void bfq_weights_tree_add(struct bfq_dat new = &((*new)->rb_right); }
- entity->weight_counter = kzalloc(sizeof(struct bfq_weight_counter), - GFP_ATOMIC); + bfqq->weight_counter = kzalloc(sizeof(struct bfq_weight_counter), + GFP_ATOMIC);
/* * In the unlucky event of an allocation failure, we just - * exit. This will cause the weight of entity to not be - * considered in bfq_differentiated_weights, which, in its - * turn, causes the scenario to be deemed wrongly symmetric in - * case entity's weight would have been the only weight making - * the scenario asymmetric. On the bright side, no unbalance - * will however occur when entity becomes inactive again (the - * invocation of this function is triggered by an activation - * of entity). In fact, bfq_weights_tree_remove does nothing - * if !entity->weight_counter. + * exit. This will cause the weight of queue to not be + * considered in bfq_varied_queue_weights_or_active_groups, + * which, in its turn, causes the scenario to be deemed + * wrongly symmetric in case bfqq's weight would have been + * the only weight making the scenario asymmetric. On the + * bright side, no unbalance will however occur when bfqq + * becomes inactive again (the invocation of this function + * is triggered by an activation of queue). In fact, + * bfq_weights_tree_remove does nothing if + * !bfqq->weight_counter. */ - if (unlikely(!entity->weight_counter)) + if (unlikely(!bfqq->weight_counter)) return;
- entity->weight_counter->weight = entity->weight; - rb_link_node(&entity->weight_counter->weights_node, parent, new); - rb_insert_color(&entity->weight_counter->weights_node, root); + bfqq->weight_counter->weight = entity->weight; + rb_link_node(&bfqq->weight_counter->weights_node, parent, new); + rb_insert_color(&bfqq->weight_counter->weights_node, root);
inc_counter: - entity->weight_counter->num_active++; + bfqq->weight_counter->num_active++; }
/* - * Decrement the weight counter associated with the entity, and, if the + * Decrement the weight counter associated with the queue, and, if the * counter reaches 0, remove the counter from the tree. * See the comments to the function bfq_weights_tree_add() for considerations * about overhead. */ void __bfq_weights_tree_remove(struct bfq_data *bfqd, - struct bfq_entity *entity, + struct bfq_queue *bfqq, struct rb_root *root) { - if (!entity->weight_counter) + if (!bfqq->weight_counter) return;
- entity->weight_counter->num_active--; - if (entity->weight_counter->num_active > 0) + bfqq->weight_counter->num_active--; + if (bfqq->weight_counter->num_active > 0) goto reset_entity_pointer;
- rb_erase(&entity->weight_counter->weights_node, root); - kfree(entity->weight_counter); + rb_erase(&bfqq->weight_counter->weights_node, root); + kfree(bfqq->weight_counter);
reset_entity_pointer: - entity->weight_counter = NULL; + bfqq->weight_counter = NULL; }
/* - * Invoke __bfq_weights_tree_remove on bfqq and all its inactive - * parent entities. + * Invoke __bfq_weights_tree_remove on bfqq and decrement the number + * of active groups for each queue's inactive parent entity. */ void bfq_weights_tree_remove(struct bfq_data *bfqd, struct bfq_queue *bfqq) { struct bfq_entity *entity = bfqq->entity.parent;
- __bfq_weights_tree_remove(bfqd, &bfqq->entity, + __bfq_weights_tree_remove(bfqd, bfqq, &bfqd->queue_weights_tree);
for_each_entity(entity) { @@ -798,17 +797,13 @@ void bfq_weights_tree_remove(struct bfq_ * next_in_service for details on why * in_service_entity must be checked too). * - * As a consequence, the weight of entity is - * not to be removed. In addition, if entity - * is active, then its parent entities are - * active as well, and thus their weights are - * not to be removed either. In the end, this - * loop must stop here. + * As a consequence, its parent entities are + * active as well, and thus this loop must + * stop here. */ break; } - __bfq_weights_tree_remove(bfqd, entity, - &bfqd->group_weights_tree); + bfqd->num_active_groups--; } }
@@ -3521,9 +3516,11 @@ static bool bfq_better_to_idle(struct bf * symmetric scenario where: * (i) each of these processes must get the same throughput as * the others; - * (ii) all these processes have the same I/O pattern - (either sequential or random). - * In fact, in such a scenario, the drive will tend to treat + * (ii) the I/O of each process has the same properties, in + * terms of locality (sequential or random), direction + * (reads or writes), request sizes, greediness + * (from I/O-bound to sporadic), and so on. + * In fact, in such a scenario, the drive tends to treat * the requests of each of these processes in about the same * way as the requests of the others, and thus to provide * each of these processes with about the same throughput @@ -3532,18 +3529,50 @@ static bool bfq_better_to_idle(struct bf * certainly needed to guarantee that bfqq receives its * assigned fraction of the device throughput (see [1] for * details). + * The problem is that idling may significantly reduce + * throughput with certain combinations of types of I/O and + * devices. An important example is sync random I/O, on flash + * storage with command queueing. So, unless bfqq falls in the + * above cases where idling also boosts throughput, it would + * be important to check conditions (i) and (ii) accurately, + * so as to avoid idling when not strictly needed for service + * guarantees. + * + * Unfortunately, it is extremely difficult to thoroughly + * check condition (ii). And, in case there are active groups, + * it becomes very difficult to check condition (i) too. In + * fact, if there are active groups, then, for condition (i) + * to become false, it is enough that an active group contains + * more active processes or sub-groups than some other active + * group. We address this issue with the following bi-modal + * behavior, implemented in the function + * bfq_symmetric_scenario(). * - * We address this issue by controlling, actually, only the - * symmetry sub-condition (i), i.e., provided that - * sub-condition (i) holds, idling is not performed, - * regardless of whether sub-condition (ii) holds. In other - * words, only if sub-condition (i) holds, then idling is + * If there are active groups, then the scenario is tagged as + * asymmetric, conservatively, without checking any of the + * conditions (i) and (ii). So the device is idled for bfqq. + * This behavior matches also the fact that groups are created + * exactly if controlling I/O (to preserve bandwidth and + * latency guarantees) is a primary concern. + * + * On the opposite end, if there are no active groups, then + * only condition (i) is actually controlled, i.e., provided + * that condition (i) holds, idling is not performed, + * regardless of whether condition (ii) holds. In other words, + * only if condition (i) does not hold, then idling is * allowed, and the device tends to be prevented from queueing - * many requests, possibly of several processes. The reason - * for not controlling also sub-condition (ii) is that we - * exploit preemption to preserve guarantees in case of - * symmetric scenarios, even if (ii) does not hold, as - * explained in the next two paragraphs. + * many requests, possibly of several processes. Since there + * are no active groups, then, to control condition (i) it is + * enough to check whether all active queues have the same + * weight. + * + * Not checking condition (ii) evidently exposes bfqq to the + * risk of getting less throughput than its fair share. 
+ * However, for queues with the same weight, a further + * mechanism, preemption, mitigates or even eliminates this + * problem. And it does so without consequences on overall + * throughput. This mechanism and its benefits are explained + * in the next three paragraphs. * * Even if a queue, say Q, is expired when it remains idle, Q * can still preempt the new in-service queue if the next @@ -3557,11 +3586,7 @@ static bool bfq_better_to_idle(struct bf * idling allows the internal queues of the device to contain * many requests, and thus to reorder requests, we can rather * safely assume that the internal scheduler still preserves a - * minimum of mid-term fairness. The motivation for using - * preemption instead of idling is that, by not idling, - * service guarantees are preserved without minimally - * sacrificing throughput. In other words, both a high - * throughput and its desired distribution are obtained. + * minimum of mid-term fairness. * * More precisely, this preemption-based, idleless approach * provides fairness in terms of IOPS, and not sectors per @@ -3580,27 +3605,27 @@ static bool bfq_better_to_idle(struct bf * 1024/8 times as high as the service received by the other * queue. * - * On the other hand, device idling is performed, and thus - * pure sector-domain guarantees are provided, for the - * following queues, which are likely to need stronger - * throughput guarantees: weight-raised queues, and queues - * with a higher weight than other queues. When such queues - * are active, sub-condition (i) is false, which triggers - * device idling. + * The motivation for using preemption instead of idling (for + * queues with the same weight) is that, by not idling, + * service guarantees are preserved (completely or at least in + * part) without minimally sacrificing throughput. And, if + * there is no active group, then the primary expectation for + * this device is probably a high throughput. * - * According to the above considerations, the next variable is - * true (only) if sub-condition (i) holds. To compute the - * value of this variable, we not only use the return value of - * the function bfq_symmetric_scenario(), but also check - * whether bfqq is being weight-raised, because - * bfq_symmetric_scenario() does not take into account also - * weight-raised queues (see comments on - * bfq_weights_tree_add()). In particular, if bfqq is being - * weight-raised, it is important to idle only if there are - * other, non-weight-raised queues that may steal throughput - * to bfqq. Actually, we should be even more precise, and - * differentiate between interactive weight raising and - * soft real-time weight raising. + * We are now left only with explaining the additional + * compound condition that is checked below for deciding + * whether the scenario is asymmetric. To explain this + * compound condition, we need to add that the function + * bfq_symmetric_scenario checks the weights of only + * non-weight-raised queues, for efficiency reasons (see + * comments on bfq_weights_tree_add()). Then the fact that + * bfqq is weight-raised is checked explicitly here. More + * precisely, the compound condition below takes into account + * also the fact that, even if bfqq is being weight-raised, + * the scenario is still symmetric if all active queues happen + * to be weight-raised. Actually, we should be even more + * precise here, and differentiate between interactive weight + * raising and soft real-time weight raising. 
* * As a side note, it is worth considering that the above * device-idling countermeasures may however fail in the @@ -5422,7 +5447,7 @@ static int bfq_init_queue(struct request bfqd->idle_slice_timer.function = bfq_idle_slice_timer;
bfqd->queue_weights_tree = RB_ROOT; - bfqd->group_weights_tree = RB_ROOT; + bfqd->num_active_groups = 0;
INIT_LIST_HEAD(&bfqd->active_list); INIT_LIST_HEAD(&bfqd->idle_list); --- a/block/bfq-iosched.h +++ b/block/bfq-iosched.h @@ -108,15 +108,14 @@ struct bfq_sched_data { };
/** - * struct bfq_weight_counter - counter of the number of all active entities + * struct bfq_weight_counter - counter of the number of all active queues * with a given weight. */ struct bfq_weight_counter { - unsigned int weight; /* weight of the entities this counter refers to */ - unsigned int num_active; /* nr of active entities with this weight */ + unsigned int weight; /* weight of the queues this counter refers to */ + unsigned int num_active; /* nr of active queues with this weight */ /* - * Weights tree member (see bfq_data's @queue_weights_tree and - * @group_weights_tree) + * Weights tree member (see bfq_data's @queue_weights_tree) */ struct rb_node weights_node; }; @@ -151,8 +150,6 @@ struct bfq_weight_counter { struct bfq_entity { /* service_tree member */ struct rb_node rb_node; - /* pointer to the weight counter associated with this entity */ - struct bfq_weight_counter *weight_counter;
/* * Flag, true if the entity is on a tree (either the active or @@ -266,6 +263,9 @@ struct bfq_queue { /* entity representing this queue in the scheduler */ struct bfq_entity entity;
+ /* pointer to the weight counter associated with this entity */ + struct bfq_weight_counter *weight_counter; + /* maximum budget allowed from the feedback mechanism */ int max_budget; /* budget expiration (in jiffies) */ @@ -449,14 +449,9 @@ struct bfq_data { */ struct rb_root queue_weights_tree; /* - * rbtree of non-queue @bfq_entity weight counters, sorted by - * weight. Used to keep track of whether all @bfq_groups have - * the same weight. The tree contains one counter for each - * distinct weight associated to some active @bfq_group (see - * the comments to the functions bfq_weights_tree_[add|remove] - * for further details). + * number of groups with requests still waiting for completion */ - struct rb_root group_weights_tree; + unsigned int num_active_groups;
/* * Number of bfq_queues containing requests (including the @@ -854,10 +849,10 @@ struct bfq_queue *bic_to_bfqq(struct bfq void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync); struct bfq_data *bic_to_bfqd(struct bfq_io_cq *bic); void bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq); -void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_entity *entity, +void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq, struct rb_root *root); void __bfq_weights_tree_remove(struct bfq_data *bfqd, - struct bfq_entity *entity, + struct bfq_queue *bfqq, struct rb_root *root); void bfq_weights_tree_remove(struct bfq_data *bfqd, struct bfq_queue *bfqq); --- a/block/bfq-wf2q.c +++ b/block/bfq-wf2q.c @@ -788,25 +788,29 @@ __bfq_entity_update_weight_prio(struct b new_weight = entity->orig_weight * (bfqq ? bfqq->wr_coeff : 1); /* - * If the weight of the entity changes, remove the entity - * from its old weight counter (if there is a counter - * associated with the entity), and add it to the counter - * associated with its new weight. + * If the weight of the entity changes, and the entity is a + * queue, remove the entity from its old weight counter (if + * there is a counter associated with the entity). */ if (prev_weight != new_weight) { - root = bfqq ? &bfqd->queue_weights_tree : - &bfqd->group_weights_tree; - __bfq_weights_tree_remove(bfqd, entity, root); + if (bfqq) { + root = &bfqd->queue_weights_tree; + __bfq_weights_tree_remove(bfqd, bfqq, root); + } else + bfqd->num_active_groups--; } entity->weight = new_weight; /* - * Add the entity to its weights tree only if it is - * not associated with a weight-raised queue. + * Add the entity, if it is not a weight-raised queue, + * to the counter associated with its new weight. */ - if (prev_weight != new_weight && - (bfqq ? bfqq->wr_coeff == 1 : 1)) - /* If we get here, root has been initialized. */ - bfq_weights_tree_add(bfqd, entity, root); + if (prev_weight != new_weight) { + if (bfqq && bfqq->wr_coeff == 1) { + /* If we get here, root has been initialized. */ + bfq_weights_tree_add(bfqd, bfqq, root); + } else + bfqd->num_active_groups++; + }
new_st->wsum += entity->weight;
@@ -1012,9 +1016,9 @@ static void __bfq_activate_entity(struct if (!bfq_entity_to_bfqq(entity)) { /* bfq_group */ struct bfq_group *bfqg = container_of(entity, struct bfq_group, entity); + struct bfq_data *bfqd = bfqg->bfqd;
- bfq_weights_tree_add(bfqg->bfqd, entity, - &bfqd->group_weights_tree); + bfqd->num_active_groups++; } #endif
@@ -1692,7 +1696,7 @@ void bfq_add_bfqq_busy(struct bfq_data *
if (!bfqq->dispatched) if (bfqq->wr_coeff == 1) - bfq_weights_tree_add(bfqd, &bfqq->entity, + bfq_weights_tree_add(bfqd, bfqq, &bfqd->queue_weights_tree);
if (bfqq->wr_coeff > 1)
From: Federico Motta federico@willer.it
commit 98fa7a3e001b21fb47c08af4304f40a3b0535cbd upstream.
Since commit 2d29c9f89fcd ("block, bfq: improve asymmetric scenarios detection"), a scenario is defined asymmetric when one of the following conditions holds:
- active bfq_queues have different weights
- one or more groups of entities (bfq_queues or other groups of entities) are active

bfq grants fairness and low latency also in such asymmetric scenarios, by plugging the dispatching of I/O if the bfq_queue in service happens to be temporarily idle. This plugging may lower throughput, so it is important to do it only when strictly needed.
By mistake, in commit 2d29c9f89fcd ("block, bfq: improve asymmetric scenarios detection") the num_active_groups counter was first incremented and subsequently decremented at any entity (group or bfq_queue) weight change.
This is useless, because only transitions from active to inactive and vice versa matter for that counter. Unfortunately this is also incorrect in the following case: the entity at issue is a bfq_queue and it is under weight raising. In fact in this case there is a spurious increment of the num_active_groups counter.
This spurious increment may cause scenarios to be wrongly detected as asymmetric, thus causing useless plugging and loss of throughput.
This commit fixes this issue by simply removing the above useless and wrong increments and decrements.
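For reference, a toy model of the invariant this fix restores (hypothetical simplified types, not the kernel code): a weight change only moves a non-weight-raised queue between weight counters, and never touches the active-groups counter, which must change on activation/deactivation transitions only.

#include <stdbool.h>

struct toy_bfqd {
	unsigned int num_active_groups;	/* changes only on (de)activation */
};

struct toy_queue {
	int weight;
	bool weight_raised;
	bool in_weights_tree;
};

static void toy_update_weight(struct toy_bfqd *bfqd, struct toy_queue *q,
			      int new_weight)
{
	if (q->weight != new_weight) {
		q->in_weights_tree = false;		/* drop old weight counter */
		q->weight = new_weight;
		if (!q->weight_raised)
			q->in_weights_tree = true;	/* add to new weight counter */
	}
	(void)bfqd;	/* num_active_groups is deliberately left untouched */
}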
Fixes: 2d29c9f89fcd ("block, bfq: improve asymmetric scenarios detection")
Tested-by: Oleksandr Natalenko oleksandr@natalenko.name
Signed-off-by: Federico Motta federico@willer.it
Signed-off-by: Paolo Valente paolo.valente@linaro.org
Signed-off-by: Jens Axboe axboe@kernel.dk
Signed-off-by: Yu Kuai yukuai3@huawei.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 block/bfq-wf2q.c | 18 ++++++------------
 1 file changed, 6 insertions(+), 12 deletions(-)
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -792,24 +792,18 @@ __bfq_entity_update_weight_prio(struct b
 	 * queue, remove the entity from its old weight counter (if
 	 * there is a counter associated with the entity).
 	 */
-	if (prev_weight != new_weight) {
-		if (bfqq) {
-			root = &bfqd->queue_weights_tree;
-			__bfq_weights_tree_remove(bfqd, bfqq, root);
-		} else
-			bfqd->num_active_groups--;
+	if (prev_weight != new_weight && bfqq) {
+		root = &bfqd->queue_weights_tree;
+		__bfq_weights_tree_remove(bfqd, bfqq, root);
 	}
 	entity->weight = new_weight;
 	/*
 	 * Add the entity, if it is not a weight-raised queue,
 	 * to the counter associated with its new weight.
 	 */
-	if (prev_weight != new_weight) {
-		if (bfqq && bfqq->wr_coeff == 1) {
-			/* If we get here, root has been initialized. */
-			bfq_weights_tree_add(bfqd, bfqq, root);
-		} else
-			bfqd->num_active_groups++;
+	if (prev_weight != new_weight && bfqq && bfqq->wr_coeff == 1) {
+		/* If we get here, root has been initialized. */
+		bfq_weights_tree_add(bfqd, bfqq, root);
 	}
new_st->wsum += entity->weight;
From: Paolo Valente paolo.valente@linaro.org
commit ba7aeae5539c7a7cccc4cf07a2bc61281a93c50e upstream.
Since commit 2d29c9f89fcd ("block, bfq: improve asymmetric scenarios detection"), if there are process groups with I/O requests waiting for completion, then BFQ tags the scenario as 'asymmetric'. This detection is needed for preserving service guarantees (for details, see comments on the computation of the variable asymmetric_scenario in the function bfq_better_to_idle).
Unfortunately, commit '2d29c9f89fcd ("block, bfq: improve asymmetric scenarios detection")' contains an error exactly in the updating of the number of groups with I/O requests waiting for completion: if a group has more than one descendant process, then the above number of groups, which is renamed from num_active_groups to a more appropriate num_groups_with_pending_reqs by this commit, may happen to be wrongly decremented multiple times, namely every time one of the descendant processes gets all its pending I/O requests completed.
A correct, complete solution should work as follows. Consider a group that is inactive, i.e., that has no descendant process with pending I/O inside BFQ queues. Then suppose that num_groups_with_pending_reqs is still accounting for this group, because the group still has some descendant process with some I/O request still in flight. num_groups_with_pending_reqs should be decremented when the in-flight request of the last descendant process is finally completed (assuming that nothing else has changed for the group in the meantime, in terms of composition of the group and active/inactive state of child groups and processes). To accomplish this, an additional pending-request counter must be added to entities, and must be updated correctly.
To avoid this additional field and operations, this commit resorts to the following tradeoff between simplicity and accuracy: for an inactive group that is still counted in num_groups_with_pending_reqs, this commit decrements num_groups_with_pending_reqs when the first descendant process of the group remains with no request waiting for completion.
This simplified scheme provides a fix to the unbalanced decrements introduced by 2d29c9f89fcd. Since this error was also caused by lack of comments on this non-trivial issue, this commit also adds related comments.
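A stand-alone sketch of the delayed-decrement scheme described above (simplified names and types; the real logic is in the bfq_weights_tree_remove() and __bfq_activate_entity() hunks below):

#include <stdbool.h>

struct toy_entity {
	bool in_groups_with_pending_reqs;	/* flagged while counted */
};

struct toy_bfqd {
	unsigned int num_groups_with_pending_reqs;
};

/* a group entity is counted at most once while it has pending requests */
static void toy_group_activated(struct toy_bfqd *bfqd, struct toy_entity *e)
{
	if (!e->in_groups_with_pending_reqs) {
		e->in_groups_with_pending_reqs = true;
		bfqd->num_groups_with_pending_reqs++;
	}
}

/* called when the first descendant queue has no request waiting for completion */
static void toy_first_descendant_idle(struct toy_bfqd *bfqd, struct toy_entity *e)
{
	if (e->in_groups_with_pending_reqs) {
		e->in_groups_with_pending_reqs = false;
		bfqd->num_groups_with_pending_reqs--;
	}
}

The flag guarantees at most one decrement per activation, which is what prevents the unbalanced decrements fixed here.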
Fixes: 2d29c9f89fcd ("block, bfq: improve asymmetric scenarios detection")
Reported-by: Steven Barrett steven@liquorix.net
Tested-by: Steven Barrett steven@liquorix.net
Tested-by: Lucjan Lucjanov lucjan.lucjanov@gmail.com
Reviewed-by: Federico Motta federico@willer.it
Signed-off-by: Paolo Valente paolo.valente@linaro.org
Signed-off-by: Jens Axboe axboe@kernel.dk
Signed-off-by: Yu Kuai yukuai3@huawei.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 block/bfq-iosched.c | 76 ++++++++++++++++++++++++++++++++++++----------------
 block/bfq-iosched.h | 51 +++++++++++++++++++++++++++++++++-
 block/bfq-wf2q.c    |  5 ++-
 3 files changed, 107 insertions(+), 25 deletions(-)
--- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -639,7 +639,7 @@ static bool bfq_varied_queue_weights_or_ bfqd->queue_weights_tree.rb_node->rb_right) #ifdef CONFIG_BFQ_GROUP_IOSCHED ) || - (bfqd->num_active_groups > 0 + (bfqd->num_groups_with_pending_reqs > 0 #endif ); } @@ -803,7 +803,21 @@ void bfq_weights_tree_remove(struct bfq_ */ break; } - bfqd->num_active_groups--; + + /* + * The decrement of num_groups_with_pending_reqs is + * not performed immediately upon the deactivation of + * entity, but it is delayed to when it also happens + * that the first leaf descendant bfqq of entity gets + * all its pending requests completed. The following + * instructions perform this delayed decrement, if + * needed. See the comments on + * num_groups_with_pending_reqs for details. + */ + if (entity->in_groups_with_pending_reqs) { + entity->in_groups_with_pending_reqs = false; + bfqd->num_groups_with_pending_reqs--; + } } }
@@ -3544,27 +3558,44 @@ static bool bfq_better_to_idle(struct bf * fact, if there are active groups, then, for condition (i) * to become false, it is enough that an active group contains * more active processes or sub-groups than some other active - * group. We address this issue with the following bi-modal - * behavior, implemented in the function + * group. More precisely, for condition (i) to hold because of + * such a group, it is not even necessary that the group is + * (still) active: it is sufficient that, even if the group + * has become inactive, some of its descendant processes still + * have some request already dispatched but still waiting for + * completion. In fact, requests have still to be guaranteed + * their share of the throughput even after being + * dispatched. In this respect, it is easy to show that, if a + * group frequently becomes inactive while still having + * in-flight requests, and if, when this happens, the group is + * not considered in the calculation of whether the scenario + * is asymmetric, then the group may fail to be guaranteed its + * fair share of the throughput (basically because idling may + * not be performed for the descendant processes of the group, + * but it had to be). We address this issue with the + * following bi-modal behavior, implemented in the function * bfq_symmetric_scenario(). * - * If there are active groups, then the scenario is tagged as + * If there are groups with requests waiting for completion + * (as commented above, some of these groups may even be + * already inactive), then the scenario is tagged as * asymmetric, conservatively, without checking any of the * conditions (i) and (ii). So the device is idled for bfqq. * This behavior matches also the fact that groups are created - * exactly if controlling I/O (to preserve bandwidth and - * latency guarantees) is a primary concern. + * exactly if controlling I/O is a primary concern (to + * preserve bandwidth and latency guarantees). * - * On the opposite end, if there are no active groups, then - * only condition (i) is actually controlled, i.e., provided - * that condition (i) holds, idling is not performed, - * regardless of whether condition (ii) holds. In other words, - * only if condition (i) does not hold, then idling is - * allowed, and the device tends to be prevented from queueing - * many requests, possibly of several processes. Since there - * are no active groups, then, to control condition (i) it is - * enough to check whether all active queues have the same - * weight. + * On the opposite end, if there are no groups with requests + * waiting for completion, then only condition (i) is actually + * controlled, i.e., provided that condition (i) holds, idling + * is not performed, regardless of whether condition (ii) + * holds. In other words, only if condition (i) does not hold, + * then idling is allowed, and the device tends to be + * prevented from queueing many requests, possibly of several + * processes. Since there are no groups with requests waiting + * for completion, then, to control condition (i) it is enough + * to check just whether all the queues with requests waiting + * for completion also have the same weight. * * Not checking condition (ii) evidently exposes bfqq to the * risk of getting less throughput than its fair share. @@ -3622,10 +3653,11 @@ static bool bfq_better_to_idle(struct bf * bfqq is weight-raised is checked explicitly here. 
More * precisely, the compound condition below takes into account * also the fact that, even if bfqq is being weight-raised, - * the scenario is still symmetric if all active queues happen - * to be weight-raised. Actually, we should be even more - * precise here, and differentiate between interactive weight - * raising and soft real-time weight raising. + * the scenario is still symmetric if all queues with requests + * waiting for completion happen to be + * weight-raised. Actually, we should be even more precise + * here, and differentiate between interactive weight raising + * and soft real-time weight raising. * * As a side note, it is worth considering that the above * device-idling countermeasures may however fail in the @@ -5447,7 +5479,7 @@ static int bfq_init_queue(struct request bfqd->idle_slice_timer.function = bfq_idle_slice_timer;
bfqd->queue_weights_tree = RB_ROOT; - bfqd->num_active_groups = 0; + bfqd->num_groups_with_pending_reqs = 0;
INIT_LIST_HEAD(&bfqd->active_list); INIT_LIST_HEAD(&bfqd->idle_list); --- a/block/bfq-iosched.h +++ b/block/bfq-iosched.h @@ -196,6 +196,9 @@ struct bfq_entity {
/* flag, set to request a weight, ioprio or ioprio_class change */ int prio_changed; + + /* flag, set if the entity is counted in groups_with_pending_reqs */ + bool in_groups_with_pending_reqs; };
struct bfq_group; @@ -448,10 +451,54 @@ struct bfq_data { * bfq_weights_tree_[add|remove] for further details). */ struct rb_root queue_weights_tree; + /* - * number of groups with requests still waiting for completion + * Number of groups with at least one descendant process that + * has at least one request waiting for completion. Note that + * this accounts for also requests already dispatched, but not + * yet completed. Therefore this number of groups may differ + * (be larger) than the number of active groups, as a group is + * considered active only if its corresponding entity has + * descendant queues with at least one request queued. This + * number is used to decide whether a scenario is symmetric. + * For a detailed explanation see comments on the computation + * of the variable asymmetric_scenario in the function + * bfq_better_to_idle(). + * + * However, it is hard to compute this number exactly, for + * groups with multiple descendant processes. Consider a group + * that is inactive, i.e., that has no descendant process with + * pending I/O inside BFQ queues. Then suppose that + * num_groups_with_pending_reqs is still accounting for this + * group, because the group has descendant processes with some + * I/O request still in flight. num_groups_with_pending_reqs + * should be decremented when the in-flight request of the + * last descendant process is finally completed (assuming that + * nothing else has changed for the group in the meantime, in + * terms of composition of the group and active/inactive state of child + * groups and processes). To accomplish this, an additional + * pending-request counter must be added to entities, and must + * be updated correctly. To avoid this additional field and operations, + * we resort to the following tradeoff between simplicity and + * accuracy: for an inactive group that is still counted in + * num_groups_with_pending_reqs, we decrement + * num_groups_with_pending_reqs when the first descendant + * process of the group remains with no request waiting for + * completion. + * + * Even this simpler decrement strategy requires a little + * carefulness: to avoid multiple decrements, we flag a group, + * more precisely an entity representing a group, as still + * counted in num_groups_with_pending_reqs when it becomes + * inactive. Then, when the first descendant queue of the + * entity remains with no request waiting for completion, + * num_groups_with_pending_reqs is decremented, and this flag + * is reset. After this flag is reset for the entity, + * num_groups_with_pending_reqs won't be decremented any + * longer in case a new descendant queue of the entity remains + * with no request waiting for completion. */ - unsigned int num_active_groups; + unsigned int num_groups_with_pending_reqs;
/* * Number of bfq_queues containing requests (including the --- a/block/bfq-wf2q.c +++ b/block/bfq-wf2q.c @@ -1012,7 +1012,10 @@ static void __bfq_activate_entity(struct container_of(entity, struct bfq_group, entity); struct bfq_data *bfqd = bfqg->bfqd;
- bfqd->num_active_groups++; + if (!entity->in_groups_with_pending_reqs) { + entity->in_groups_with_pending_reqs = true; + bfqd->num_groups_with_pending_reqs++; + } } #endif
From: Paolo Valente paolo.valente@linaro.org
commit 9dee8b3b057e1da26f85f1842f2aaf3bb200fb94 upstream.
bfq maintains an ordered list, through a red-black tree, of unique weights of active bfq_queues. This list is used to detect whether there are active queues with differentiated weights. The weight of a queue is removed from the list when both the following two conditions become true:
(1) the bfq_queue is flagged as inactive;
(2) the bfq_queue has no in-flight request any longer.
Unfortunately, in the rare cases where condition (2) becomes true before condition (1), the removal fails, because the function to remove the weight of the queue (bfq_weights_tree_remove) is rightly invoked in the path that deactivates the bfq_queue, but mistakenly invoked *before* the function that actually performs the deactivation (bfq_deactivate_bfqq).
This commit moves the invocation of bfq_weights_tree_remove for condition (1) to after bfq_deactivate_bfqq. As a consequence of this move, it is necessary to add a further reference to the queue when the weight of a queue is added, because the queue might otherwise be freed before bfq_weights_tree_remove is invoked. This commit adds this reference and makes all related modifications.
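A toy illustration of the refcounting rule introduced here (hypothetical simplified types): the weights tree holds its own reference to the queue, taken when the weight counter is attached and dropped when it is detached, so the queue cannot be freed before bfq_weights_tree_remove is invoked.

#include <stddef.h>

struct toy_queue {
	int ref;
	void *weight_counter;	/* NULL when the queue is not in the weights tree */
};

static void toy_weights_tree_add(struct toy_queue *q, void *counter)
{
	if (q->weight_counter)
		return;
	q->weight_counter = counter;
	q->ref++;		/* reference owned by the weights tree */
}

static void toy_weights_tree_remove(struct toy_queue *q)
{
	if (!q->weight_counter)
		return;
	q->weight_counter = NULL;
	q->ref--;		/* stands in for bfq_put_queue(); may free q */
}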
Signed-off-by: Paolo Valente paolo.valente@linaro.org
Signed-off-by: Jens Axboe axboe@kernel.dk
Signed-off-by: Yu Kuai yukuai3@huawei.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 block/bfq-iosched.c | 17 +++++++++++++----
 block/bfq-wf2q.c    |  6 +++---
 2 files changed, 16 insertions(+), 7 deletions(-)
--- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -748,6 +748,7 @@ void bfq_weights_tree_add(struct bfq_dat
inc_counter: bfqq->weight_counter->num_active++; + bfqq->ref++; }
/* @@ -772,6 +773,7 @@ void __bfq_weights_tree_remove(struct bf
reset_entity_pointer: bfqq->weight_counter = NULL; + bfq_put_queue(bfqq); }
/* @@ -783,9 +785,6 @@ void bfq_weights_tree_remove(struct bfq_ { struct bfq_entity *entity = bfqq->entity.parent;
- __bfq_weights_tree_remove(bfqd, bfqq, - &bfqd->queue_weights_tree); - for_each_entity(entity) { struct bfq_sched_data *sd = entity->my_sched_data;
@@ -819,6 +818,15 @@ void bfq_weights_tree_remove(struct bfq_ bfqd->num_groups_with_pending_reqs--; } } + + /* + * Next function is invoked last, because it causes bfqq to be + * freed if the following holds: bfqq is not in service and + * has no dispatched request. DO NOT use bfqq after the next + * function invocation. + */ + __bfq_weights_tree_remove(bfqd, bfqq, + &bfqd->queue_weights_tree); }
/* @@ -1012,7 +1020,8 @@ bfq_bfqq_resume_state(struct bfq_queue *
static int bfqq_process_refs(struct bfq_queue *bfqq) { - return bfqq->ref - bfqq->allocated - bfqq->entity.on_st; + return bfqq->ref - bfqq->allocated - bfqq->entity.on_st - + (bfqq->weight_counter != NULL); }
/* Empty burst list and add just bfqq (see comments on bfq_handle_burst) */ --- a/block/bfq-wf2q.c +++ b/block/bfq-wf2q.c @@ -1668,15 +1668,15 @@ void bfq_del_bfqq_busy(struct bfq_data *
bfqd->busy_queues--;
- if (!bfqq->dispatched) - bfq_weights_tree_remove(bfqd, bfqq); - if (bfqq->wr_coeff > 1) bfqd->wr_busy_queues--;
bfqg_stats_update_dequeue(bfqq_group(bfqq));
bfq_deactivate_bfqq(bfqd, bfqq, true, expiration); + + if (!bfqq->dispatched) + bfq_weights_tree_remove(bfqd, bfqq); }
/*
From: Paolo Valente paolo.valente@linaro.org
commit eed47d19d9362bdd958e4ab56af480b9dbf6b2b6 upstream.
The function bfq_bfqq_expire() invokes the function __bfq_bfqq_expire(), and the latter may free the in-service bfq-queue. If this happens, then no other instruction of bfq_bfqq_expire() must be executed, or a use-after-free will occur.
Based on the assumption that __bfq_bfqq_expire() invokes bfq_put_queue() on the in-service bfq-queue exactly once, the queue is assumed to be freed if its refcounter is equal to one right before invoking __bfq_bfqq_expire().
But, since commit 9dee8b3b057e ("block, bfq: fix queue removal from weights tree") this assumption is false. __bfq_bfqq_expire() may also invoke bfq_weights_tree_remove() and, since commit 9dee8b3b057e ("block, bfq: fix queue removal from weights tree"), also the latter function may invoke bfq_put_queue(). So __bfq_bfqq_expire() may invoke bfq_put_queue() twice, and this is the actual case where the in-service queue may happen to be freed.
To address this issue, this commit moves the check on the refcounter of the queue right around the last bfq_put_queue() that may be invoked on the queue.
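A stand-alone sketch of the resulting control flow (simplified; the real code is in __bfq_bfqd_reset_in_service() and bfq_bfqq_expire() in the diff below): the reset helper reports whether dropping the service reference freed the queue, and the caller stops touching the queue in that case.

#include <stdbool.h>

struct toy_queue {
	int ref;
};

/* returns true if dropping the service reference freed the queue */
static bool toy_reset_in_service(struct toy_queue *q)
{
	int ref = q->ref;

	q->ref--;		/* stands in for bfq_put_queue() */
	return ref == 1;	/* it was the last reference */
}

static void toy_expire(struct toy_queue *q)
{
	if (toy_reset_in_service(q))
		return;		/* queue is gone, no more actions on it */
	/* ... only reached while the queue is still referenced ... */
}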
Fixes: 9dee8b3b057e ("block, bfq: fix queue removal from weights tree")
Reported-by: Dmitrii Tcvetkov demfloro@demfloro.ru
Reported-by: Douglas Anderson dianders@chromium.org
Tested-by: Dmitrii Tcvetkov demfloro@demfloro.ru
Tested-by: Douglas Anderson dianders@chromium.org
Signed-off-by: Paolo Valente paolo.valente@linaro.org
Signed-off-by: Jens Axboe axboe@kernel.dk
Signed-off-by: Yu Kuai yukuai3@huawei.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 block/bfq-iosched.c | 15 +++++++--------
 block/bfq-iosched.h |  2 +-
 block/bfq-wf2q.c    | 17 +++++++++++++++--
 3 files changed, 23 insertions(+), 11 deletions(-)
--- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -2816,7 +2816,7 @@ static void bfq_dispatch_remove(struct r bfq_remove_request(q, rq); }
-static void __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq) +static bool __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq) { /* * If this bfqq is shared between multiple processes, check @@ -2849,9 +2849,11 @@ static void __bfq_bfqq_expire(struct bfq /* * All in-service entities must have been properly deactivated * or requeued before executing the next function, which - * resets all in-service entites as no more in service. + * resets all in-service entities as no more in service. This + * may cause bfqq to be freed. If this happens, the next + * function returns true. */ - __bfq_bfqd_reset_in_service(bfqd); + return __bfq_bfqd_reset_in_service(bfqd); }
/** @@ -3256,7 +3258,6 @@ void bfq_bfqq_expire(struct bfq_data *bf bool slow; unsigned long delta = 0; struct bfq_entity *entity = &bfqq->entity; - int ref;
/* * Check whether the process is slow (see bfq_bfqq_is_slow). @@ -3325,10 +3326,8 @@ void bfq_bfqq_expire(struct bfq_data *bf * reason. */ __bfq_bfqq_recalc_budget(bfqd, bfqq, reason); - ref = bfqq->ref; - __bfq_bfqq_expire(bfqd, bfqq); - - if (ref == 1) /* bfqq is gone, no more actions on it */ + if (__bfq_bfqq_expire(bfqd, bfqq)) + /* bfqq is gone, no more actions on it */ return;
bfqq->injected_service = 0; --- a/block/bfq-iosched.h +++ b/block/bfq-iosched.h @@ -993,7 +993,7 @@ bool __bfq_deactivate_entity(struct bfq_ bool ins_into_idle_tree); bool next_queue_may_preempt(struct bfq_data *bfqd); struct bfq_queue *bfq_get_next_queue(struct bfq_data *bfqd); -void __bfq_bfqd_reset_in_service(struct bfq_data *bfqd); +bool __bfq_bfqd_reset_in_service(struct bfq_data *bfqd); void bfq_deactivate_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq, bool ins_into_idle_tree, bool expiration); void bfq_activate_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq); --- a/block/bfq-wf2q.c +++ b/block/bfq-wf2q.c @@ -1600,7 +1600,8 @@ struct bfq_queue *bfq_get_next_queue(str return bfqq; }
-void __bfq_bfqd_reset_in_service(struct bfq_data *bfqd) +/* returns true if the in-service queue gets freed */ +bool __bfq_bfqd_reset_in_service(struct bfq_data *bfqd) { struct bfq_queue *in_serv_bfqq = bfqd->in_service_queue; struct bfq_entity *in_serv_entity = &in_serv_bfqq->entity; @@ -1624,8 +1625,20 @@ void __bfq_bfqd_reset_in_service(struct * service tree either, then release the service reference to * the queue it represents (taken with bfq_get_entity). */ - if (!in_serv_entity->on_st) + if (!in_serv_entity->on_st) { + /* + * If no process is referencing in_serv_bfqq any + * longer, then the service reference may be the only + * reference to the queue. If this is the case, then + * bfqq gets freed here. + */ + int ref = in_serv_bfqq->ref; bfq_put_queue(in_serv_bfqq); + if (ref == 1) + return true; + } + + return false; }
void bfq_deactivate_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
From: Benjamin Tissoires benjamin.tissoires@redhat.com
commit 93a2207c254ca102ebbdae47b00f19bbfbfa7ecd upstream.
An oversight in the previous commit: we don't even parse or start the device, meaning that the device is not presented to user space.
Fixes: 93020953d0fa ("HID: check for valid USB device for many HID drivers")
Cc: stable@vger.kernel.org
Link: https://bugs.archlinux.org/task/73048
Link: https://bugzilla.kernel.org/show_bug.cgi?id=215341
Link: https://lore.kernel.org/r/e4efbf13-bd8d-0370-629b-6c80c0044b15@leemhuis.info...
Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/hid/hid-holtek-mouse.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)
--- a/drivers/hid/hid-holtek-mouse.c
+++ b/drivers/hid/hid-holtek-mouse.c
@@ -68,8 +68,23 @@ static __u8 *holtek_mouse_report_fixup(s
 static int holtek_mouse_probe(struct hid_device *hdev,
 			      const struct hid_device_id *id)
 {
+	int ret;
+
 	if (!hid_is_usb(hdev))
 		return -EINVAL;
+
+	ret = hid_parse(hdev);
+	if (ret) {
+		hid_err(hdev, "hid parse failed: %d\n", ret);
+		return ret;
+	}
+
+	ret = hid_hw_start(hdev, HID_CONNECT_DEFAULT);
+	if (ret) {
+		hid_err(hdev, "hw start failed: %d\n", ret);
+		return ret;
+	}
+
 	return 0;
 }
From: Robert Marko robert.marko@sartura.hr
[ Upstream commit 08d2061ff9c5319a07bf9ca6bbf11fdec68f704a ]
Orange Pi Zero Plus uses a Realtek RTL8211E RGMII Gigabit PHY, but it is currently set to plain RGMII mode, meaning that it doesn't introduce delays.
With this setup, TX packets are completely lost. Changing the mode to RGMII-ID, so that the PHY adds the delays internally, fixes the issue.
Fixes: a7affb13b271 ("arm64: allwinner: H5: Add Xunlong Orange Pi Zero Plus")
Acked-by: Chen-Yu Tsai wens@csie.org
Tested-by: Ron Goossens rgoossens@gmail.com
Tested-by: Samuel Holland samuel@sholland.org
Signed-off-by: Robert Marko robert.marko@sartura.hr
Signed-off-by: Maxime Ripard maxime@cerno.tech
Link: https://lore.kernel.org/r/20211117140222.43692-1-robert.marko@sartura.hr
Signed-off-by: Sasha Levin sashal@kernel.org
---
 arch/arm64/boot/dts/allwinner/sun50i-h5-orangepi-zero-plus.dts | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h5-orangepi-zero-plus.dts b/arch/arm64/boot/dts/allwinner/sun50i-h5-orangepi-zero-plus.dts
index 1238de25a9691..9b1789504f7a0 100644
--- a/arch/arm64/boot/dts/allwinner/sun50i-h5-orangepi-zero-plus.dts
+++ b/arch/arm64/boot/dts/allwinner/sun50i-h5-orangepi-zero-plus.dts
@@ -72,7 +72,7 @@
 	pinctrl-0 = <&emac_rgmii_pins>;
 	phy-supply = <&reg_gmac_3v3>;
 	phy-handle = <&ext_rgmii_phy>;
-	phy-mode = "rgmii";
+	phy-mode = "rgmii-id";
 	status = "okay";
 };
From: Dongliang Mu mudongliangabcd@gmail.com
[ Upstream commit db6689b643d8653092f5853751ea2cdbc299f8d3 ]
The corresponding API for clk_prepare is clk_unprepare, not clk_disable_unprepare.
Fix this by changing clk_disable_unprepare to clk_unprepare.
Fixes: 5762ab71eb24 ("spi: Add support for Armada 3700 SPI Controller") Signed-off-by: Dongliang Mu mudongliangabcd@gmail.com Link: https://lore.kernel.org/r/20211206101931.2816597-1-mudongliangabcd@gmail.com Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/spi/spi-armada-3700.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/spi/spi-armada-3700.c b/drivers/spi/spi-armada-3700.c index 7dcb14d303eb4..d8715954f4e08 100644 --- a/drivers/spi/spi-armada-3700.c +++ b/drivers/spi/spi-armada-3700.c @@ -912,7 +912,7 @@ static int a3700_spi_probe(struct platform_device *pdev) return 0;
error_clk: - clk_disable_unprepare(spi->clk); + clk_unprepare(spi->clk); error: spi_master_put(master); out:
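As an editorial aside, here is a minimal sketch of the pairing rule this fix relies on (hypothetical driver code, not taken from spi-armada-3700; example_setup_hw() is a made-up helper): clk_prepare() is undone by clk_unprepare(), whereas clk_prepare_enable() is undone by clk_disable_unprepare().

#include <linux/clk.h>
#include <linux/platform_device.h>

/* Hypothetical probe error path illustrating the clk API pairing. */
static int example_probe(struct platform_device *pdev)
{
	struct clk *clk;
	int ret;

	clk = devm_clk_get(&pdev->dev, NULL);
	if (IS_ERR(clk))
		return PTR_ERR(clk);

	ret = clk_prepare(clk);		/* prepared, but not yet enabled */
	if (ret)
		return ret;

	ret = example_setup_hw(pdev);	/* made-up helper */
	if (ret) {
		clk_unprepare(clk);	/* pairs with clk_prepare() */
		return ret;
	}

	return 0;
}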
From: José Expósito jose.exposito89@gmail.com
[ Upstream commit bee90911e0138c76ee67458ac0d58b38a3190f65 ]
The wrong goto label was used for the error case, which missed the cleanup of the pkt allocation.
Fixes: d39bf40e55e6 ("IB/qib: Protect from buffer overflow in struct qib_user_sdma_pkt fields") Link: https://lore.kernel.org/r/20211208175238.29983-1-jose.exposito89@gmail.com Addresses-Coverity-ID: 1493352 ("Resource leak") Signed-off-by: José Expósito jose.exposito89@gmail.com Acked-by: Mike Marciniszyn mike.marciniszyn@cornelisnetworks.com Signed-off-by: Jason Gunthorpe jgg@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/hw/qib/qib_user_sdma.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/infiniband/hw/qib/qib_user_sdma.c b/drivers/infiniband/hw/qib/qib_user_sdma.c index 47ed3ab25dc95..6e6730f036b03 100644 --- a/drivers/infiniband/hw/qib/qib_user_sdma.c +++ b/drivers/infiniband/hw/qib/qib_user_sdma.c @@ -945,7 +945,7 @@ static int qib_user_sdma_queue_pkts(const struct qib_devdata *dd, &addrlimit) || addrlimit > type_max(typeof(pkt->addrlimit))) { ret = -EINVAL; - goto free_pbc; + goto free_pkt; } pkt->addrlimit = addrlimit;
From: Ignacy Gawędzki ignacy.gawedzki@green-communications.fr
[ Upstream commit ebb966d3bdfed581ecccbb4a7432341baf7619b4 ]
In commit 5648b5e1169f ("netfilter: nfnetlink_queue: fix OOB when mac header was cleared"), the test for non-empty MAC header introduced in commit 2c38de4c1f8da7 ("netfilter: fix looped (broad|multi)cast's MAC handling") has been replaced with a test for a set MAC header.
This breaks the case when the MAC header has been reset (using skb_reset_mac_header), as is the case with looped-back multicast packets. As a result, the packets ending up in NFQUEUE get a bogus hwaddr interpreted from the first bytes of the IP header.
This patch adds a test for a non-empty MAC header in addition to the test for a set MAC header. The same two tests are also implemented in nfnetlink_log.c, where the initial code of commit 2c38de4c1f8da7 ("netfilter: fix looped (broad|multi)cast's MAC handling") has not been touched, but where supposedly the same situation may happen.
Fixes: 5648b5e1169f ("netfilter: nfnetlink_queue: fix OOB when mac header was cleared") Signed-off-by: Ignacy Gawędzki ignacy.gawedzki@green-communications.fr Reviewed-by: Florian Westphal fw@strlen.de Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/netfilter/nfnetlink_log.c | 3 ++- net/netfilter/nfnetlink_queue.c | 3 ++- 2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/net/netfilter/nfnetlink_log.c b/net/netfilter/nfnetlink_log.c index 25298b3eb8546..17ca9a681d47b 100644 --- a/net/netfilter/nfnetlink_log.c +++ b/net/netfilter/nfnetlink_log.c @@ -509,7 +509,8 @@ __build_packet_message(struct nfnl_log_net *log, goto nla_put_failure;
if (indev && skb->dev && - skb->mac_header != skb->network_header) { + skb_mac_header_was_set(skb) && + skb_mac_header_len(skb) != 0) { struct nfulnl_msg_packet_hw phw; int len;
diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c index eb5a052d3b252..8955431f2ab26 100644 --- a/net/netfilter/nfnetlink_queue.c +++ b/net/netfilter/nfnetlink_queue.c @@ -566,7 +566,8 @@ nfqnl_build_packet_message(struct net *net, struct nfqnl_instance *queue, goto nla_put_failure;
if (indev && entskb->dev && - skb_mac_header_was_set(entskb)) { + skb_mac_header_was_set(entskb) && + skb_mac_header_len(entskb) != 0) { struct nfqnl_msg_packet_hw phw; int len;
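For readers less familiar with the skb helpers used here, an illustrative combination of the two tests (not part of the patch): a reset MAC header points at the network header, so its length is zero even though skb_mac_header_was_set() returns true.

#include <linux/skbuff.h>

/*
 * Illustrative helper only: true when the skb carries a real hardware
 * header that can be copied out to user space.
 */
static inline bool example_skb_has_hw_header(const struct sk_buff *skb)
{
	return skb_mac_header_was_set(skb) && skb_mac_header_len(skb) != 0;
}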
From: Jiasheng Jiang jiasheng@iscas.ac.cn
[ Upstream commit 60ec7fcfe76892a1479afab51ff17a4281923156 ]
The return value of kcalloc() needs to be checked to avoid dereferencing a null pointer when the allocation fails. Therefore, change the return type of qlcnic_sriov_alloc_vlans() so that it returns -ENOMEM when the allocation fails and 0 otherwise. Also, qlcnic_sriov_set_guest_vlan_mode() and __qlcnic_pci_sriov_enable() should handle the return value of qlcnic_sriov_alloc_vlans().
Fixes: 154d0c810c53 ("qlcnic: VLAN enhancement for 84XX adapters") Signed-off-by: Jiasheng Jiang jiasheng@iscas.ac.cn Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov.h | 2 +- .../net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c | 12 +++++++++--- drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c | 4 +++- 3 files changed, 13 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov.h b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov.h index 5f327659efa7a..85b688f60b876 100644 --- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov.h +++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov.h @@ -202,7 +202,7 @@ int qlcnic_sriov_get_vf_vport_info(struct qlcnic_adapter *, struct qlcnic_info *, u16); int qlcnic_sriov_cfg_vf_guest_vlan(struct qlcnic_adapter *, u16, u8); void qlcnic_sriov_free_vlans(struct qlcnic_adapter *); -void qlcnic_sriov_alloc_vlans(struct qlcnic_adapter *); +int qlcnic_sriov_alloc_vlans(struct qlcnic_adapter *); bool qlcnic_sriov_check_any_vlan(struct qlcnic_vf_info *); void qlcnic_sriov_del_vlan_id(struct qlcnic_sriov *, struct qlcnic_vf_info *, u16); diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c index 77e386ebff09c..98275f18a87b0 100644 --- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c +++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c @@ -433,7 +433,7 @@ static int qlcnic_sriov_set_guest_vlan_mode(struct qlcnic_adapter *adapter, struct qlcnic_cmd_args *cmd) { struct qlcnic_sriov *sriov = adapter->ahw->sriov; - int i, num_vlans; + int i, num_vlans, ret; u16 *vlans;
if (sriov->allowed_vlans) @@ -444,7 +444,9 @@ static int qlcnic_sriov_set_guest_vlan_mode(struct qlcnic_adapter *adapter, dev_info(&adapter->pdev->dev, "Number of allowed Guest VLANs = %d\n", sriov->num_allowed_vlans);
- qlcnic_sriov_alloc_vlans(adapter); + ret = qlcnic_sriov_alloc_vlans(adapter); + if (ret) + return ret;
if (!sriov->any_vlan) return 0; @@ -2164,7 +2166,7 @@ static int qlcnic_sriov_vf_resume(struct qlcnic_adapter *adapter) return err; }
-void qlcnic_sriov_alloc_vlans(struct qlcnic_adapter *adapter) +int qlcnic_sriov_alloc_vlans(struct qlcnic_adapter *adapter) { struct qlcnic_sriov *sriov = adapter->ahw->sriov; struct qlcnic_vf_info *vf; @@ -2174,7 +2176,11 @@ void qlcnic_sriov_alloc_vlans(struct qlcnic_adapter *adapter) vf = &sriov->vf_info[i]; vf->sriov_vlans = kcalloc(sriov->num_allowed_vlans, sizeof(*vf->sriov_vlans), GFP_KERNEL); + if (!vf->sriov_vlans) + return -ENOMEM; } + + return 0; }
void qlcnic_sriov_free_vlans(struct qlcnic_adapter *adapter) diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c index 50eaafa3eaba3..c9f2cd2462230 100644 --- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c +++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c @@ -598,7 +598,9 @@ static int __qlcnic_pci_sriov_enable(struct qlcnic_adapter *adapter, if (err) goto del_flr_queue;
- qlcnic_sriov_alloc_vlans(adapter); + err = qlcnic_sriov_alloc_vlans(adapter); + if (err) + goto del_flr_queue;
return err;
From: Willem de Bruijn willemb@google.com
[ Upstream commit 7e5cced9ca84df52d874aca6b632f930b3dc5bc6 ]
An skb whose skb->protocol is 0 at the time of virtio_net_hdr_to_skb may have its protocol inferred from the virtio_net_hdr by virtio_net_hdr_set_proto.
Unlike TCP, UDP does not have separate types for IPv4 and IPv6. Type VIRTIO_NET_HDR_GSO_UDP is guessed to be IPv4/UDP. As of the below commit, UFOv6 packets are dropped due to not matching the protocol as obtained from dev_parse_header_protocol.
Invert the test to take that L2 protocol field as starting point and pass both UFOv4 and UFOv6 for VIRTIO_NET_HDR_GSO_UDP.
Fixes: 924a9bc362a5 ("net: check if protocol extracted by virtio_net_hdr_set_proto is correct") Link: https://lore.kernel.org/netdev/CABcq3pG9GRCYqFDBAJ48H1vpnnX=41u+MhQnayF1ztLH... Reported-by: Andrew Melnichenko andrew@daynix.com Signed-off-by: Willem de Bruijn willemb@google.com Link: https://lore.kernel.org/r/20211220144901.2784030-1-willemdebruijn.kernel@gma... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/virtio_net.h | 22 ++++++++++++++++++++-- 1 file changed, 20 insertions(+), 2 deletions(-)
diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h index e7330a9a7d7dc..8874b278cd34a 100644 --- a/include/linux/virtio_net.h +++ b/include/linux/virtio_net.h @@ -7,6 +7,21 @@ #include <uapi/linux/udp.h> #include <uapi/linux/virtio_net.h>
+static inline bool virtio_net_hdr_match_proto(__be16 protocol, __u8 gso_type) +{ + switch (gso_type & ~VIRTIO_NET_HDR_GSO_ECN) { + case VIRTIO_NET_HDR_GSO_TCPV4: + return protocol == cpu_to_be16(ETH_P_IP); + case VIRTIO_NET_HDR_GSO_TCPV6: + return protocol == cpu_to_be16(ETH_P_IPV6); + case VIRTIO_NET_HDR_GSO_UDP: + return protocol == cpu_to_be16(ETH_P_IP) || + protocol == cpu_to_be16(ETH_P_IPV6); + default: + return false; + } +} + static inline int virtio_net_hdr_set_proto(struct sk_buff *skb, const struct virtio_net_hdr *hdr) { @@ -88,9 +103,12 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb, if (!skb->protocol) { __be16 protocol = dev_parse_header_protocol(skb);
- virtio_net_hdr_set_proto(skb, hdr); - if (protocol && protocol != skb->protocol) + if (!protocol) + virtio_net_hdr_set_proto(skb, hdr); + else if (!virtio_net_hdr_match_proto(protocol, hdr->gso_type)) return -EINVAL; + else + skb->protocol = protocol; } retry: if (!skb_flow_dissect_flow_keys_basic(skb, &keys,
From: Willem de Bruijn willemb@google.com
[ Upstream commit 1ed1d592113959f00cc552c3b9f47ca2d157768f ]
virtio_net_hdr_set_proto infers skb->protocol from the virtio_net_hdr gso_type, to avoid packets getting dropped for lack of a proto type.
Its protocol choice is a guess, especially in the case of UFO, where the single VIRTIO_NET_HDR_GSO_UDP label covers both UFOv4 and UFOv6.
Skip this best effort if the field is already initialized. Whether explicitly from userspace, or implicitly based on an earlier call to dev_parse_header_protocol (which is more robust, but was introduced after this patch).
Fixes: 9d2f67e43b73 ("net/packet: fix packet drop as of virtio gso") Signed-off-by: Willem de Bruijn willemb@google.com Link: https://lore.kernel.org/r/20211220145027.2784293-1-willemdebruijn.kernel@gma... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/virtio_net.h | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h index 8874b278cd34a..faee73c084d49 100644 --- a/include/linux/virtio_net.h +++ b/include/linux/virtio_net.h @@ -25,6 +25,9 @@ static inline bool virtio_net_hdr_match_proto(__be16 protocol, __u8 gso_type) static inline int virtio_net_hdr_set_proto(struct sk_buff *skb, const struct virtio_net_hdr *hdr) { + if (skb->protocol) + return 0; + switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) { case VIRTIO_NET_HDR_GSO_TCPV4: case VIRTIO_NET_HDR_GSO_UDP:
From: Wu Bo wubo40@huawei.com
[ Upstream commit ffb76a86f8096a8206be03b14adda6092e18e275 ]
Hi,
When testing install and uninstall of ipmi_si.ko and ipmi_msghandler.ko, the system crashed.
The log as follows: [ 141.087026] BUG: unable to handle kernel paging request at ffffffffc09b3a5a [ 141.087241] PGD 8fe4c0d067 P4D 8fe4c0d067 PUD 8fe4c0f067 PMD 103ad89067 PTE 0 [ 141.087464] Oops: 0010 [#1] SMP NOPTI [ 141.087580] CPU: 67 PID: 668 Comm: kworker/67:1 Kdump: loaded Not tainted 4.18.0.x86_64 #47 [ 141.088009] Workqueue: events 0xffffffffc09b3a40 [ 141.088009] RIP: 0010:0xffffffffc09b3a5a [ 141.088009] Code: Bad RIP value. [ 141.088009] RSP: 0018:ffffb9094e2c3e88 EFLAGS: 00010246 [ 141.088009] RAX: 0000000000000000 RBX: ffff9abfdb1f04a0 RCX: 0000000000000000 [ 141.088009] RDX: 0000000000000000 RSI: 0000000000000246 RDI: 0000000000000246 [ 141.088009] RBP: 0000000000000000 R08: ffff9abfffee3cb8 R09: 00000000000002e1 [ 141.088009] R10: ffffb9094cb73d90 R11: 00000000000f4240 R12: ffff9abfffee8700 [ 141.088009] R13: 0000000000000000 R14: ffff9abfdb1f04a0 R15: ffff9abfdb1f04a8 [ 141.088009] FS: 0000000000000000(0000) GS:ffff9abfffec0000(0000) knlGS:0000000000000000 [ 141.088009] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 141.088009] CR2: ffffffffc09b3a30 CR3: 0000008fe4c0a001 CR4: 00000000007606e0 [ 141.088009] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 141.088009] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ 141.088009] PKRU: 55555554 [ 141.088009] Call Trace: [ 141.088009] ? process_one_work+0x195/0x390 [ 141.088009] ? worker_thread+0x30/0x390 [ 141.088009] ? process_one_work+0x390/0x390 [ 141.088009] ? kthread+0x10d/0x130 [ 141.088009] ? kthread_flush_work_fn+0x10/0x10 [ 141.088009] ? ret_from_fork+0x35/0x40] BUG: unable to handle kernel paging request at ffffffffc0b28a5a [ 200.223240] PGD 97fe00d067 P4D 97fe00d067 PUD 97fe00f067 PMD a580cbf067 PTE 0 [ 200.223464] Oops: 0010 [#1] SMP NOPTI [ 200.223579] CPU: 63 PID: 664 Comm: kworker/63:1 Kdump: loaded Not tainted 4.18.0.x86_64 #46 [ 200.224008] Workqueue: events 0xffffffffc0b28a40 [ 200.224008] RIP: 0010:0xffffffffc0b28a5a [ 200.224008] Code: Bad RIP value. [ 200.224008] RSP: 0018:ffffbf3c8e2a3e88 EFLAGS: 00010246 [ 200.224008] RAX: 0000000000000000 RBX: ffffa0799ad6bca0 RCX: 0000000000000000 [ 200.224008] RDX: 0000000000000000 RSI: 0000000000000246 RDI: 0000000000000246 [ 200.224008] RBP: 0000000000000000 R08: ffff9fe43fde3cb8 R09: 00000000000000d5 [ 200.224008] R10: ffffbf3c8cb53d90 R11: 00000000000f4240 R12: ffff9fe43fde8700 [ 200.224008] R13: 0000000000000000 R14: ffffa0799ad6bca0 R15: ffffa0799ad6bca8 [ 200.224008] FS: 0000000000000000(0000) GS:ffff9fe43fdc0000(0000) knlGS:0000000000000000 [ 200.224008] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 200.224008] CR2: ffffffffc0b28a30 CR3: 00000097fe00a002 CR4: 00000000007606e0 [ 200.224008] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 200.224008] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ 200.224008] PKRU: 55555554 [ 200.224008] Call Trace: [ 200.224008] ? process_one_work+0x195/0x390 [ 200.224008] ? worker_thread+0x30/0x390 [ 200.224008] ? process_one_work+0x390/0x390 [ 200.224008] ? kthread+0x10d/0x130 [ 200.224008] ? kthread_flush_work_fn+0x10/0x10 [ 200.224008] ? ret_from_fork+0x35/0x40 [ 200.224008] kernel fault(0x1) notification starting on CPU 63 [ 200.224008] kernel fault(0x1) notification finished on CPU 63 [ 200.224008] CR2: ffffffffc0b28a5a [ 200.224008] ---[ end trace c82a412d93f57412 ]---
The reason is as follows: T1: rmmod ipmi_si. ->ipmi_unregister_smi() -> ipmi_bmc_unregister() -> __ipmi_bmc_unregister() -> kref_put(&bmc->usecount, cleanup_bmc_device); -> schedule_work(&bmc->remove_work);
T2: rmmod ipmi_msghandler. The ipmi_msghandler module is uninstalled and its module space is freed.
T3: bmc->remove_work doing cleanup the bmc resource. -> cleanup_bmc_work() -> platform_device_unregister(&bmc->pdev); -> platform_device_del(pdev); -> device_del(&pdev->dev); -> kobject_uevent(&dev->kobj, KOBJ_REMOVE); -> kobject_uevent_env() -> dev_uevent() -> if (dev->type && dev->type->name)
The 'dev->type' (bmc_device_type) pointer now points into memory that was freed when the ipmi_msghandler module was unloaded, so dereferencing 'dev->type->name' crashes the system.
drivers/char/ipmi/ipmi_msghandler.c: 2820 static const struct device_type bmc_device_type = { 2821 .groups = bmc_dev_attr_groups, 2822 };
Steps to reproduce: Add a time delay in cleanup_bmc_work() function, and uninstall ipmi_si and ipmi_msghandler module.
2910 static void cleanup_bmc_work(struct work_struct *work) 2911 { 2912 struct bmc_device *bmc = container_of(work, struct bmc_device, 2913 remove_work); 2914 int id = bmc->pdev.id; /* Unregister overwrites id */ 2915 2916 msleep(3000); <--- 2917 platform_device_unregister(&bmc->pdev); 2918 ida_simple_remove(&ipmi_bmc_ida, id); 2919 }
Use 'remove_work_wq' instead of 'system_wq' to solve this issue.
Fixes: b2cfd8ab4add ("ipmi: Rework device id and guid handling to catch changing BMCs") Signed-off-by: Wu Bo wubo40@huawei.com Message-Id: 1640070034-56671-1-git-send-email-wubo40@huawei.com Signed-off-by: Corey Minyard cminyard@mvista.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/char/ipmi/ipmi_msghandler.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c index 48929df7673b1..6db709e2c34b1 100644 --- a/drivers/char/ipmi/ipmi_msghandler.c +++ b/drivers/char/ipmi/ipmi_msghandler.c @@ -2863,7 +2863,7 @@ cleanup_bmc_device(struct kref *ref) * with removing the device attributes while reading a device * attribute. */ - schedule_work(&bmc->remove_work); + queue_work(remove_work_wq, &bmc->remove_work); }
/*
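As background, a sketch of why queueing on a module-owned workqueue avoids the UAF (illustrative only, names are not copied from the driver): the dedicated workqueue is drained on module exit, so no remove_work can still run after the module text is gone.

#include <linux/workqueue.h>

static struct workqueue_struct *example_remove_wq;	/* stands in for remove_work_wq */

static void example_module_exit(void)
{
	/*
	 * destroy_workqueue() drains the queue first, so every pending
	 * remove_work has finished before the module's code and data are
	 * freed.  Work left on system_wq has no such guarantee, which is
	 * what made the crash above possible.
	 */
	destroy_workqueue(example_remove_wq);
}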
From: Fernando Fernandez Mancera ffmancera@riseup.net
[ Upstream commit 1c15b05baea71a5ff98235783e3e4ad227760876 ]
When 802.3ad bond mode is configured, the ad_actor_system option defaults to "00:00:00:00:00:00". But trying to set the all-zeroes MAC as the actors' system address failed with EINVAL.

An all-zeroes Ethernet address is a valid value here; only multicast addresses are not.
Fixes: 171a42c38c6e ("bonding: add netlink support for sys prio, actor sys mac, and port key") Signed-off-by: Fernando Fernandez Mancera ffmancera@riseup.net Acked-by: Jay Vosburgh jay.vosburgh@canonical.com Link: https://lore.kernel.org/r/20211221111345.2462-1-ffmancera@riseup.net Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- Documentation/networking/bonding.txt | 11 ++++++----- drivers/net/bonding/bond_options.c | 2 +- 2 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/Documentation/networking/bonding.txt b/Documentation/networking/bonding.txt index d3e5dd26db12d..4035a495c0606 100644 --- a/Documentation/networking/bonding.txt +++ b/Documentation/networking/bonding.txt @@ -191,11 +191,12 @@ ad_actor_sys_prio ad_actor_system
In an AD system, this specifies the mac-address for the actor in - protocol packet exchanges (LACPDUs). The value cannot be NULL or - multicast. It is preferred to have the local-admin bit set for this - mac but driver does not enforce it. If the value is not given then - system defaults to using the masters' mac address as actors' system - address. + protocol packet exchanges (LACPDUs). The value cannot be a multicast + address. If the all-zeroes MAC is specified, bonding will internally + use the MAC of the bond itself. It is preferred to have the + local-admin bit set for this mac but driver does not enforce it. If + the value is not given then system defaults to using the masters' + mac address as actors' system address.
This parameter has effect only in 802.3ad mode and is available through SysFs interface. diff --git a/drivers/net/bonding/bond_options.c b/drivers/net/bonding/bond_options.c index 80867bd8f44c3..c9aa28eee191d 100644 --- a/drivers/net/bonding/bond_options.c +++ b/drivers/net/bonding/bond_options.c @@ -1439,7 +1439,7 @@ static int bond_option_ad_actor_system_set(struct bonding *bond, mac = (u8 *)&newval->value; }
- if (!is_valid_ether_addr(mac)) + if (is_multicast_ether_addr(mac)) goto err;
netdev_dbg(bond->dev, "Setting ad_actor_system to %pM\n", mac);
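For illustration (standalone sketch, not bonding code): is_valid_ether_addr() rejects both multicast and all-zeroes addresses, so the documented default 00:00:00:00:00:00 could never be set; checking only is_multicast_ether_addr() keeps rejecting multicast while accepting the all-zeroes default.

#include <linux/etherdevice.h>

/* Illustrative predicate mirroring the new check. */
static bool example_ad_actor_system_ok(const u8 *mac)
{
	return !is_multicast_ether_addr(mac);
}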
From: Jiasheng Jiang jiasheng@iscas.ac.cn
[ Upstream commit db6d6afe382de5a65d6ccf51253ab48b8e8336c3 ]
platform_get_irq() will not always succeed; it returns a negative error code on failure. Therefore, check the return value in order to avoid using an invalid irq.
Fixes: 658d439b2292 ("fjes: Introduce FUJITSU Extended Socket Network Device driver") Signed-off-by: Jiasheng Jiang jiasheng@iscas.ac.cn Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/fjes/fjes_main.c | 5 +++++ 1 file changed, 5 insertions(+)
diff --git a/drivers/net/fjes/fjes_main.c b/drivers/net/fjes/fjes_main.c index 778d3729f460a..89b3bc389f469 100644 --- a/drivers/net/fjes/fjes_main.c +++ b/drivers/net/fjes/fjes_main.c @@ -1284,6 +1284,11 @@ static int fjes_probe(struct platform_device *plat_dev) hw->hw_res.start = res->start; hw->hw_res.size = resource_size(res); hw->hw_res.irq = platform_get_irq(plat_dev, 0); + if (hw->hw_res.irq < 0) { + err = hw->hw_res.irq; + goto err_free_control_wq; + } + err = fjes_hw_init(&adapter->hw); if (err) goto err_free_control_wq;
From: Jiasheng Jiang jiasheng@iscas.ac.cn
[ Upstream commit cb93b3e11d405f20a405a07482d01147ef4934a3 ]
platform_get_irq() could fail and return a negative error code, so check the return value in order to avoid using an invalid irq.
Fixes: ae150435b59e ("smsc: Move the SMC (SMSC) drivers") Signed-off-by: Jiasheng Jiang jiasheng@iscas.ac.cn Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/smsc/smc911x.c | 5 +++++ 1 file changed, 5 insertions(+)
diff --git a/drivers/net/ethernet/smsc/smc911x.c b/drivers/net/ethernet/smsc/smc911x.c index f97b35430c840..ac1ad00e2fc55 100644 --- a/drivers/net/ethernet/smsc/smc911x.c +++ b/drivers/net/ethernet/smsc/smc911x.c @@ -2080,6 +2080,11 @@ static int smc911x_drv_probe(struct platform_device *pdev)
ndev->dma = (unsigned char)-1; ndev->irq = platform_get_irq(pdev, 0); + if (ndev->irq < 0) { + ret = ndev->irq; + goto release_both; + } + lp = netdev_priv(ndev); lp->netdev = ndev; #ifdef SMC_DYNAMIC_BUS_CONFIG
From: Jiasheng Jiang jiasheng@iscas.ac.cn
[ Upstream commit 9b8bdd1eb5890aeeab7391dddcf8bd51f7b07216 ]
Because kcalloc() can fail, set rx_queue->page_ptr_mask to 0 when it does, in order to maintain consistency.
Fixes: 5a6681e22c14 ("sfc: separate out SFC4000 ("Falcon") support into new sfc-falcon driver") Signed-off-by: Jiasheng Jiang jiasheng@iscas.ac.cn Acked-by: Martin Habets habetsm.xilinx@gmail.com Link: https://lore.kernel.org/r/20211220140344.978408-1-jiasheng@iscas.ac.cn Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/sfc/falcon/rx.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/sfc/falcon/rx.c b/drivers/net/ethernet/sfc/falcon/rx.c index 02456ed13a7d4..5b93a3af4575d 100644 --- a/drivers/net/ethernet/sfc/falcon/rx.c +++ b/drivers/net/ethernet/sfc/falcon/rx.c @@ -732,7 +732,10 @@ static void ef4_init_rx_recycle_ring(struct ef4_nic *efx, efx->rx_bufs_per_page); rx_queue->page_ring = kcalloc(page_ring_size, sizeof(*rx_queue->page_ring), GFP_KERNEL); - rx_queue->page_ptr_mask = page_ring_size - 1; + if (!rx_queue->page_ring) + rx_queue->page_ptr_mask = 0; + else + rx_queue->page_ptr_mask = page_ring_size - 1; }
void ef4_init_rx_queue(struct ef4_rx_queue *rx_queue)
Hi!
From: Jiasheng Jiang jiasheng@iscas.ac.cn
[ Upstream commit 9b8bdd1eb5890aeeab7391dddcf8bd51f7b07216 ]
Because of the possible failure of the kcalloc, it should be better to set rx_queue->page_ptr_mask to 0 when it happens in order to maintain the consistency.
Again this is confusing/wrong, or at least not a complete fix...
+++ b/drivers/net/ethernet/sfc/falcon/rx.c @@ -732,7 +732,10 @@ static void ef4_init_rx_recycle_ring(struct ef4_nic *efx, efx->rx_bufs_per_page); rx_queue->page_ring = kcalloc(page_ring_size, sizeof(*rx_queue->page_ring), GFP_KERNEL);
- rx_queue->page_ptr_mask = page_ring_size - 1;
- if (!rx_queue->page_ring)
rx_queue->page_ptr_mask = 0;
- else
rx_queue->page_ptr_mask = page_ring_size - 1;
}
...as we have
index = rx_queue->page_remove & rx_queue->page_ptr_mask; page = rx_queue->page_ring[index]; in ef4_reuse_page, and similar problems in other places, including
for (i = 0; i <= rx_queue->page_ptr_mask; i++) { struct page *page = rx_queue->page_ring[i];
. Best regards, Pavel
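For illustration only, one way the remaining users could be guarded (an untested sketch against the quoted code, not a proposed patch):

/*
 * Sketch: bail out of the recycle path when the ring was never allocated,
 * since page_ptr_mask == 0 alone still lets page_ring[0] be dereferenced.
 */
static struct page *example_reuse_page(struct ef4_rx_queue *rx_queue)
{
	unsigned int index;

	if (unlikely(!rx_queue->page_ring))
		return NULL;	/* caller falls back to allocating a fresh page */

	index = rx_queue->page_remove & rx_queue->page_ptr_mask;
	return rx_queue->page_ring[index];
	/* ... rest of the original ef4_reuse_page() logic ... */
}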
From: Guenter Roeck linux@roeck-us.net
[ Upstream commit fce15c45d3fbd9fc1feaaf3210d8e3f8b33dfd3a ]
The detect function had a comment "Make compiler happy" when it did not read the second configuration register. As it turns out, the code was checking the contents of this register for manufacturer ID 0xA1 (NXP Semiconductor/Philips), but never actually read the register. So it wasn't surprising that the compiler complained, and it indeed had a point. Fix the code to read the register contents for manufacturer ID 0xa1.
At the same time, the code was reading the register for manufacturer ID 0x41 (Analog Devices), but it was not using the results. In effect it was just checking if reading the register returned an error. That doesn't really add much if any value, so stop doing that.
Fixes: f90be42fb383 ("hwmon: (lm90) Refactor reading of config2 register") Signed-off-by: Guenter Roeck linux@roeck-us.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/hwmon/lm90.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/hwmon/lm90.c b/drivers/hwmon/lm90.c index c187e557678ef..3df4e8654448b 100644 --- a/drivers/hwmon/lm90.c +++ b/drivers/hwmon/lm90.c @@ -1439,12 +1439,11 @@ static int lm90_detect(struct i2c_client *client, if (man_id < 0 || chip_id < 0 || config1 < 0 || convrate < 0) return -ENODEV;
- if (man_id == 0x01 || man_id == 0x5C || man_id == 0x41) { + if (man_id == 0x01 || man_id == 0x5C || man_id == 0xA1) { config2 = i2c_smbus_read_byte_data(client, LM90_REG_R_CONFIG2); if (config2 < 0) return -ENODEV; - } else - config2 = 0; /* Make compiler happy */ + }
if ((address == 0x4C || address == 0x4D) && man_id == 0x01) { /* National Semiconductor */
From: Xiaoke Wang xkernel.wang@foxmail.com
commit c01c1db1dc632edafb0dff32d40daf4f9c1a4e19 upstream.
kstrdup() can return NULL, so it is better to check its return value.
Signed-off-by: Xiaoke Wang xkernel.wang@foxmail.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/tencent_094816F3522E0DC704056C789352EBBF0606@qq.co... Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- sound/core/jack.c | 4 ++++ 1 file changed, 4 insertions(+)
--- a/sound/core/jack.c +++ b/sound/core/jack.c @@ -234,6 +234,10 @@ int snd_jack_new(struct snd_card *card, return -ENOMEM;
jack->id = kstrdup(id, GFP_KERNEL); + if (jack->id == NULL) { + kfree(jack); + return -ENOMEM; + }
/* don't creat input device for phantom jack */ if (!phantom_jack) {
From: Colin Ian King colin.i.king@gmail.com
commit 2dee54b289fbc810669a1b2b8a0887fa1c9a14d7 upstream.
Static analysis with scan-build has found an assignment to vp2 that is never used. It seems that the check on vp->state > 0 should actually be on vp2->state instead. Fix this.

This dates back to 2002; I found the offending commit in the git history git://git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git, commit 91e39521bbf6 ("[PATCH] ALSA patch for 2.5.4").
Signed-off-by: Colin Ian King colin.i.king@gmail.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20211212172025.470367-1-colin.i.king@gmail.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- sound/drivers/opl3/opl3_midi.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/sound/drivers/opl3/opl3_midi.c +++ b/sound/drivers/opl3/opl3_midi.c @@ -412,7 +412,7 @@ void snd_opl3_note_on(void *p, int note, } if (instr_4op) { vp2 = &opl3->voices[voice + 3]; - if (vp->state > 0) { + if (vp2->state > 0) { opl3_reg = reg_side | (OPL3_REG_KEYON_BLOCK + voice_offset + 3); reg_val = vp->keyon_reg & ~OPL3_KEYON_BIT;
From: José Expósito jose.exposito89@gmail.com
commit 12f247ab590a08856441efdbd351cf2cc8f60a2d upstream.
The "id_buf" buffer is stored in "data->raw_info_block" and freed by "mxt_free_object_table" in case of error.
Return instead of jumping to avoid a double free.
Addresses-Coverity-ID: 1474582 ("Double free") Fixes: 068bdb67ef74 ("Input: atmel_mxt_ts - fix the firmware update") Signed-off-by: José Expósito jose.exposito89@gmail.com Link: https://lore.kernel.org/r/20211212194257.68879-1-jose.exposito89@gmail.com Cc: stable@vger.kernel.org Signed-off-by: Dmitry Torokhov dmitry.torokhov@gmail.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/input/touchscreen/atmel_mxt_ts.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/input/touchscreen/atmel_mxt_ts.c +++ b/drivers/input/touchscreen/atmel_mxt_ts.c @@ -1809,7 +1809,7 @@ static int mxt_read_info_block(struct mx if (error) { dev_err(&client->dev, "Error %d parsing object table\n", error); mxt_free_object_table(data); - goto err_free_mem; + return error; }
data->object_table = (struct mxt_object *)(id_buf + MXT_OBJECT_START);
From: Thadeu Lima de Souza Cascardo cascardo@canonical.com
commit 2b5160b12091285c5aca45980f100a9294af7b04 upstream.
In case init_srcu_struct() fails (because of a memory allocation failure), we might proceed with the driver initialization despite srcu_struct not being entirely initialized.
Fixes: 913a89f009d9 ("ipmi: Don't initialize anything in the core until something uses it") Signed-off-by: Thadeu Lima de Souza Cascardo cascardo@canonical.com Cc: Corey Minyard cminyard@mvista.com Cc: stable@vger.kernel.org Message-Id: 20211217154410.1228673-1-cascardo@canonical.com Signed-off-by: Corey Minyard cminyard@mvista.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/char/ipmi/ipmi_msghandler.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/drivers/char/ipmi/ipmi_msghandler.c +++ b/drivers/char/ipmi/ipmi_msghandler.c @@ -5085,7 +5085,9 @@ static int ipmi_init_msghandler(void) if (initialized) goto out;
- init_srcu_struct(&ipmi_interfaces_srcu); + rv = init_srcu_struct(&ipmi_interfaces_srcu); + if (rv) + goto out;
timer_setup(&ipmi_timer, ipmi_timeout, 0); mod_timer(&ipmi_timer, jiffies + IPMI_TIMEOUT_JIFFIES);
From: Thadeu Lima de Souza Cascardo cascardo@canonical.com
commit 75d70d76cb7b927cace2cb34265d68ebb3306b13 upstream.
If the workqueue allocation fails, the driver is marked as not initialized, and timer and panic_notifier will be left registered.
Instead of removing those when workqueue allocation fails, do the workqueue initialization before doing it, and cleanup srcu_struct if it fails.
Fixes: 1d49eb91e86e ("ipmi: Move remove_work to dedicated workqueue") Signed-off-by: Thadeu Lima de Souza Cascardo cascardo@canonical.com Cc: Corey Minyard cminyard@mvista.com Cc: Ioanna Alifieraki ioanna-maria.alifieraki@canonical.com Cc: stable@vger.kernel.org Message-Id: 20211217154410.1228673-2-cascardo@canonical.com Signed-off-by: Corey Minyard cminyard@mvista.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/char/ipmi/ipmi_msghandler.c | 15 +++++++++------ 1 file changed, 9 insertions(+), 6 deletions(-)
--- a/drivers/char/ipmi/ipmi_msghandler.c +++ b/drivers/char/ipmi/ipmi_msghandler.c @@ -5089,20 +5089,23 @@ static int ipmi_init_msghandler(void) if (rv) goto out;
- timer_setup(&ipmi_timer, ipmi_timeout, 0); - mod_timer(&ipmi_timer, jiffies + IPMI_TIMEOUT_JIFFIES); - - atomic_notifier_chain_register(&panic_notifier_list, &panic_block); - remove_work_wq = create_singlethread_workqueue("ipmi-msghandler-remove-wq"); if (!remove_work_wq) { pr_err("unable to create ipmi-msghandler-remove-wq workqueue"); rv = -ENOMEM; - goto out; + goto out_wq; }
+ timer_setup(&ipmi_timer, ipmi_timeout, 0); + mod_timer(&ipmi_timer, jiffies + IPMI_TIMEOUT_JIFFIES); + + atomic_notifier_chain_register(&panic_notifier_list, &panic_block); + initialized = true;
+out_wq: + if (rv) + cleanup_srcu_struct(&ipmi_interfaces_srcu); out: mutex_unlock(&ipmi_interfaces_mutex); return rv;
From: John David Anglin dave.anglin@bell.net
commit 8f66fce0f46560b9e910787ff7ad0974441c4f9c upstream.
The completer in the "or,ev %r1,%r30,%r30" instruction is reversed, so we are not clipping the LWS number when we are called from a 32-bit process (W=0). We need to nullify the following depdi instruction when the least-significant bit of %r30 is 1.
If the %r20 register is not clipped, a user process could perform a LWS call that would branch to an undefined location in the kernel and potentially crash the machine.
Signed-off-by: John David Anglin dave.anglin@bell.net Cc: stable@vger.kernel.org # 4.19+ Signed-off-by: Helge Deller deller@gmx.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/parisc/kernel/syscall.S | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/parisc/kernel/syscall.S +++ b/arch/parisc/kernel/syscall.S @@ -478,7 +478,7 @@ lws_start: extrd,u %r1,PSW_W_BIT,1,%r1 /* sp must be aligned on 4, so deposit the W bit setting into * the bottom of sp temporarily */ - or,ev %r1,%r30,%r30 + or,od %r1,%r30,%r30
/* Clip LWS number to a 32-bit value for 32-bit processes */ depdi 0, 31, 32, %r20
From: Andrew Cooper andrew.cooper3@citrix.com
commit 57690554abe135fee81d6ac33cc94d75a7e224bb upstream.
Both __pkru_allows_write() and arch_set_user_pkey_access() shift PKRU_WD_BIT (a signed constant) by up to 30 bits, hitting the sign bit.
Use unsigned constants instead.
Clearly pkey 15 has not been used in combination with UBSAN yet.
Noticed by code inspection only. I can't actually provoke the compiler into generating incorrect logic as far as this shift is concerned.
[ dhansen: add stable@ tag, plus minor changelog massaging,
For anyone doing backports, these #defines were in arch/x86/include/asm/pgtable.h before 784a46618f6. ]
Fixes: 33a709b25a76 ("mm/gup, x86/mm/pkeys: Check VMAs and PTEs for protection keys") Signed-off-by: Andrew Cooper andrew.cooper3@citrix.com Signed-off-by: Dave Hansen dave.hansen@linux.intel.com Signed-off-by: Borislav Petkov bp@suse.de Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20211216000856.4480-1-andrew.cooper3@citrix.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/include/asm/pgtable.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -1356,8 +1356,8 @@ static inline pmd_t pmd_swp_clear_soft_d #endif #endif
-#define PKRU_AD_BIT 0x1 -#define PKRU_WD_BIT 0x2 +#define PKRU_AD_BIT 0x1u +#define PKRU_WD_BIT 0x2u #define PKRU_BITS_PER_PKEY 2
static inline bool __pkru_allows_read(u32 pkru, u16 pkey)
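To see the undefined behaviour being fixed, a standalone sketch assuming the PKRU layout described above (two bits per pkey):

/*
 * With pkey == 15, pkey_shift == 30.  Shifting the signed int constant 0x2
 * left by 30 reaches the sign bit of a 32-bit int, which is undefined
 * behaviour; the unsigned constants 0x1u/0x2u keep the shift well defined.
 */
#define EXAMPLE_PKRU_AD_BIT		0x1u
#define EXAMPLE_PKRU_WD_BIT		0x2u
#define EXAMPLE_PKRU_BITS_PER_PKEY	2

static inline bool example_pkru_allows_write(u32 pkru, u16 pkey)
{
	int pkey_shift = pkey * EXAMPLE_PKRU_BITS_PER_PKEY;

	return !(pkru & ((EXAMPLE_PKRU_AD_BIT | EXAMPLE_PKRU_WD_BIT) << pkey_shift));
}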
From: Fabien Dessenne fabien.dessenne@foss.st.com
commit b67210cc217f9ca1c576909454d846970c13dfd4 upstream.
Consider the GPIO controller offset (from "gpio-ranges") when computing the maximum GPIO line number. This fixes an issue where gpio-ranges uses a non-zero offset, e.g. gpio-ranges = <&pinctrl 6 86 10>. In that case the last valid GPIO line is not 9 but 15 (6 + 10 - 1).
Cc: stable@vger.kernel.org Fixes: 67e2996f72c7 ("pinctrl: stm32: fix the reported number of GPIO lines per bank") Reported-by: Christoph Fritz chf.fritz@googlemail.com Signed-off-by: Fabien Dessenne fabien.dessenne@foss.st.com Link: https://lore.kernel.org/r/20211215095808.621716-1-fabien.dessenne@foss.st.co... Signed-off-by: Linus Walleij linus.walleij@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/pinctrl/stm32/pinctrl-stm32.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
--- a/drivers/pinctrl/stm32/pinctrl-stm32.c +++ b/drivers/pinctrl/stm32/pinctrl-stm32.c @@ -1011,10 +1011,10 @@ static int stm32_gpiolib_register_bank(s bank_nr = args.args[1] / STM32_GPIO_PINS_PER_BANK; bank->gpio_chip.base = args.args[1];
- npins = args.args[2]; - while (!of_parse_phandle_with_fixed_args(np, "gpio-ranges", 3, - ++i, &args)) - npins += args.args[2]; + /* get the last defined gpio line (offset + nb of pins) */ + npins = args.args[0] + args.args[2]; + while (!of_parse_phandle_with_fixed_args(np, "gpio-ranges", 3, ++i, &args)) + npins = max(npins, (int)(args.args[0] + args.args[2])); } else { bank_nr = pctl->nbanks; bank->gpio_chip.base = bank_nr * STM32_GPIO_PINS_PER_BANK;
From: Ard Biesheuvel ardb@kernel.org
commit 8536a5ef886005bc443c2da9b842d69fd3d7647f upstream.
The Thumb2 version of the FP exception handling entry code treats the register holding the CP number (R8) differently, resulting in the iWMMXT CP number check to be incorrect.
Fix this by unifying the ARM and Thumb2 code paths, and switch the order of the additions of the TI_USED_CP offset and the shifted CP index.
Cc: stable@vger.kernel.org Fixes: b86040a59feb ("Thumb-2: Implementation of the unified start-up and exceptions code") Signed-off-by: Ard Biesheuvel ardb@kernel.org Signed-off-by: Russell King (Oracle) rmk+kernel@armlinux.org.uk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/arm/kernel/entry-armv.S | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-)
--- a/arch/arm/kernel/entry-armv.S +++ b/arch/arm/kernel/entry-armv.S @@ -620,11 +620,9 @@ call_fpe: tstne r0, #0x04000000 @ bit 26 set on both ARM and Thumb-2 reteq lr and r8, r0, #0x00000f00 @ mask out CP number - THUMB( lsr r8, r8, #8 ) mov r7, #1 - add r6, r10, #TI_USED_CP - ARM( strb r7, [r6, r8, lsr #8] ) @ set appropriate used_cp[] - THUMB( strb r7, [r6, r8] ) @ set appropriate used_cp[] + add r6, r10, r8, lsr #8 @ add used_cp[] array offset first + strb r7, [r6, #TI_USED_CP] @ set appropriate used_cp[] #ifdef CONFIG_IWMMXT @ Test if we need to give access to iWMMXt coprocessors ldr r5, [r10, #TI_FLAGS] @@ -633,7 +631,7 @@ call_fpe: bcs iwmmxt_task_enable #endif ARM( add pc, pc, r8, lsr #6 ) - THUMB( lsl r8, r8, #2 ) + THUMB( lsr r8, r8, #6 ) THUMB( add pc, r8 ) nop
From: Chao Yu chao@kernel.org
commit 5598b24efaf4892741c798b425d543e4bed357a1 upstream.
As Wenqing Liu reported in bugzilla:
https://bugzilla.kernel.org/show_bug.cgi?id=215235
- Overview page fault in f2fs_setxattr() when mount and operate on corrupted image
- Reproduce tested on kernel 5.16-rc3, 5.15.X under root
1. unzip tmp7.zip 2. ./single.sh f2fs 7
Sometimes need to run the script several times
- Kernel dump loop0: detected capacity change from 0 to 131072 F2FS-fs (loop0): Found nat_bits in checkpoint F2FS-fs (loop0): Mounted with checkpoint version = 7548c2ee BUG: unable to handle page fault for address: ffffe47bc7123f48 RIP: 0010:kfree+0x66/0x320 Call Trace: __f2fs_setxattr+0x2aa/0xc00 [f2fs] f2fs_setxattr+0xfa/0x480 [f2fs] __f2fs_set_acl+0x19b/0x330 [f2fs] __vfs_removexattr+0x52/0x70 __vfs_removexattr_locked+0xb1/0x140 vfs_removexattr+0x56/0x100 removexattr+0x57/0x80 path_removexattr+0xa3/0xc0 __x64_sys_removexattr+0x17/0x20 do_syscall_64+0x37/0xb0 entry_SYSCALL_64_after_hwframe+0x44/0xae
The root cause is that __f2fs_setxattr() misses a sanity check on the last xattr entry, resulting in an out-of-bounds memory access while updating inconsistent xattr data of the target inode.
After the fix, it can detect such xattr inconsistency as below:
F2FS-fs (loop11): inode (7) has invalid last xattr entry, entry_size: 60676 F2FS-fs (loop11): inode (8) has corrupted xattr F2FS-fs (loop11): inode (8) has corrupted xattr F2FS-fs (loop11): inode (8) has invalid last xattr entry, entry_size: 47736
Cc: stable@vger.kernel.org Reported-by: Wenqing Liu wenqingliu0120@gmail.com Signed-off-by: Chao Yu chao@kernel.org Signed-off-by: Jaegeuk Kim jaegeuk@kernel.org [delete f2fs_err() call as it's not in older kernels - gregkh] Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/f2fs/xattr.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-)
--- a/fs/f2fs/xattr.c +++ b/fs/f2fs/xattr.c @@ -658,8 +658,15 @@ static int __f2fs_setxattr(struct inode }
last = here; - while (!IS_XATTR_LAST_ENTRY(last)) + while (!IS_XATTR_LAST_ENTRY(last)) { + if ((void *)(last) + sizeof(__u32) > last_base_addr || + (void *)XATTR_NEXT_ENTRY(last) > last_base_addr) { + set_sbi_flag(F2FS_I_SB(inode), SBI_NEED_FSCK); + error = -EFSCORRUPTED; + goto exit; + } last = XATTR_NEXT_ENTRY(last); + }
newsize = XATTR_ALIGN(sizeof(struct f2fs_xattr_entry) + len + size);
Hi!
From: Chao Yu chao@kernel.org
commit 5598b24efaf4892741c798b425d543e4bed357a1 upstream.
Not sure what went wrong here, but as far as I can tell, this patch is _not_ yet in upstream.
git log:
commit eec4df26e24e978e49ccf9bcf49ca0f2ccdaeffe Merge: e7c124bd0463 4eb1782eaa9f Author: Linus Torvalds torvalds@linux-foundation.org Date: Wed Dec 29 10:07:20 2021 -0800
pavel@amd:/data/l/clean-cg$ git show 5598b24efaf4892741c798b425d543e4bed357a1 fatal: bad object 5598b24efaf4892741c798b425d543e4bed357a1
Best regards, Pavel
From: Marian Postevca posteuca@mutex.one
commit 890d5b40908bfd1a79be018d2d297cf9df60f4ee upstream.
When listening for notifications through netlink of a new interface being registered, sporadically, it is possible for the MAC to be read as zero. The zero MAC address lasts a short period of time and then switches to a valid random MAC address.
This causes problems for netd in Android, which assumes that the interface is malfunctioning and will not use it.
In the good case we get this log: InterfaceController::getCfg() ifName usb0 hwAddr 92:a8:f0:73:79:5b ipv4Addr 0.0.0.0 flags 0x1002
In the error case we get these logs: InterfaceController::getCfg() ifName usb0 hwAddr 00:00:00:00:00:00 ipv4Addr 0.0.0.0 flags 0x1002
netd : interfaceGetCfg("usb0") netd : interfaceSetCfg() -> ServiceSpecificException (99, "[Cannot assign requested address] : ioctl() failed")
The reason for the issue is the order in which the interface is set up: it is first registered through register_netdev(), and only afterwards is the MAC address set.

Fix this by first setting the MAC address of the net_device and only then calling register_netdev().
Fixes: bcd4a1c40bee885e ("usb: gadget: u_ether: construct with default values and add setters/getters") Cc: stable@vger.kernel.org Signed-off-by: Marian Postevca posteuca@mutex.one Link: https://lore.kernel.org/r/20211204214912.17627-1-posteuca@mutex.one Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/gadget/function/u_ether.c | 15 +++++---------- 1 file changed, 5 insertions(+), 10 deletions(-)
--- a/drivers/usb/gadget/function/u_ether.c +++ b/drivers/usb/gadget/function/u_ether.c @@ -860,19 +860,23 @@ int gether_register_netdev(struct net_de { struct eth_dev *dev; struct usb_gadget *g; - struct sockaddr sa; int status;
if (!net->dev.parent) return -EINVAL; dev = netdev_priv(net); g = dev->gadget; + + memcpy(net->dev_addr, dev->dev_mac, ETH_ALEN); + net->addr_assign_type = NET_ADDR_RANDOM; + status = register_netdev(net); if (status < 0) { dev_dbg(&g->dev, "register_netdev failed, %d\n", status); return status; } else { INFO(dev, "HOST MAC %pM\n", dev->host_mac); + INFO(dev, "MAC %pM\n", dev->dev_mac);
/* two kinds of host-initiated state changes: * - iff DATA transfer is active, carrier is "on" @@ -880,15 +884,6 @@ int gether_register_netdev(struct net_de */ netif_carrier_off(net); } - sa.sa_family = net->type; - memcpy(sa.sa_data, dev->dev_mac, ETH_ALEN); - rtnl_lock(); - status = dev_set_mac_address(net, &sa); - rtnl_unlock(); - if (status) - pr_warn("cannot set self ethernet address: %d\n", status); - else - INFO(dev, "MAC %pM\n", dev->dev_mac);
return status; }
From: Sean Christopherson seanjc@google.com
commit 0ff29701ffad9a5d5a24344d8b09f3af7b96ffda upstream.
Update the documentation for kvm-intel's emulate_invalid_guest_state to rectify the description of KVM's default behavior, and to document that the behavior and thus parameter only applies to L1.
Fixes: a27685c33acc ("KVM: VMX: Emulate invalid guest state by default") Signed-off-by: Sean Christopherson seanjc@google.com Message-Id: 20211207193006.120997-4-seanjc@google.com Reviewed-by: Maxim Levitsky mlevitsk@redhat.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- Documentation/admin-guide/kernel-parameters.txt | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-)
--- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -2019,8 +2019,12 @@ Default is 1 (enabled)
kvm-intel.emulate_invalid_guest_state= - [KVM,Intel] Enable emulation of invalid guest states - Default is 0 (disabled) + [KVM,Intel] Disable emulation of invalid guest state. + Ignored if kvm-intel.enable_unrestricted_guest=1, as + guest state is never invalid for unrestricted guests. + This param doesn't apply to nested guests (L2), as KVM + never emulates invalid L2 guest state. + Default is 1 (enabled)
kvm-intel.flexpriority= [KVM,Intel] Disable FlexPriority feature (TPR shadow).
From: Samuel Čavoj samuel@cavoj.net
commit 44ee250aeeabb28b52a10397ac17ffb8bfe94839 upstream.
The ASUS UM325UA suffers from the same issue as the ASUS UX425UA, which is a very similar laptop. The i8042 device is not usable immediately after boot and fails to initialize, requiring a deferred retry.
Enable the deferred probe quirk for the UM325UA.
BugLink: https://bugzilla.suse.com/show_bug.cgi?id=1190256 Signed-off-by: Samuel Čavoj samuel@cavoj.net Link: https://lore.kernel.org/r/20211204015615.232948-1-samuel@cavoj.net Signed-off-by: Dmitry Torokhov dmitry.torokhov@gmail.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/input/serio/i8042-x86ia64io.h | 7 +++++++ 1 file changed, 7 insertions(+)
--- a/drivers/input/serio/i8042-x86ia64io.h +++ b/drivers/input/serio/i8042-x86ia64io.h @@ -996,6 +996,13 @@ static const struct dmi_system_id __init DMI_MATCH(DMI_PRODUCT_NAME, "C504"), }, }, + { + /* ASUS ZenBook UM325UA */ + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), + DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX325UA_UM325UA"), + }, + }, { } };
From: Guenter Roeck linux@roeck-us.net
commit cdc5287acad9ede121924a9c9313544b80d15842 upstream.
Bit 7 of the status register indicates that the chip is busy doing a conversion. It does not indicate an alarm status. Stop reporting it as alarm status bit.
Signed-off-by: Guenter Roeck linux@roeck-us.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/hwmon/lm90.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/hwmon/lm90.c +++ b/drivers/hwmon/lm90.c @@ -197,6 +197,7 @@ enum chips { lm90, adm1032, lm99, lm86, #define LM90_STATUS_RHIGH (1 << 4) /* remote high temp limit tripped */ #define LM90_STATUS_LLOW (1 << 5) /* local low temp limit tripped */ #define LM90_STATUS_LHIGH (1 << 6) /* local high temp limit tripped */ +#define LM90_STATUS_BUSY (1 << 7) /* conversion is ongoing */
#define MAX6696_STATUS2_R2THRM (1 << 1) /* remote2 THERM limit tripped */ #define MAX6696_STATUS2_R2OPEN (1 << 2) /* remote2 is an open circuit */ @@ -786,7 +787,7 @@ static int lm90_update_device(struct dev val = lm90_read_reg(client, LM90_REG_R_STATUS); if (val < 0) return val; - data->alarms = val; /* lower 8 bit of alarms */ + data->alarms = val & ~LM90_STATUS_BUSY;
if (data->kind == max6696) { val = lm90_select_remote_channel(client, data, 1);
From: Lin Ma linma@zju.edu.cn
commit 1ade48d0c27d5da1ccf4b583d8c5fc8b534a3ac8 upstream.
The existing cleanup routine implementation is not well synchronized with the syscall routine. When a device is detaching, the race below could occur.
static int ax25_sendmsg(...) { ... lock_sock() ax25 = sk_to_ax25(sk); if (ax25->ax25_dev == NULL) // CHECK ... ax25_queue_xmit(skb, ax25->ax25_dev->dev); // USE ... }
static void ax25_kill_by_device(...) { ... if (s->ax25_dev == ax25_dev) { s->ax25_dev = NULL; ... }
Other syscall functions like ax25_getsockopt, ax25_getname and ax25_info_show also suffer from similar races. To fix them, this patch introduces lock_sock() into ax25_kill_by_device() in order to guarantee that the nullify action in the cleanup routine cannot proceed while another socket request is pending.
Signed-off-by: Hanjie Wu nagi@zju.edu.cn Signed-off-by: Lin Ma linma@zju.edu.cn Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/ax25/af_ax25.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/net/ax25/af_ax25.c +++ b/net/ax25/af_ax25.c @@ -88,8 +88,10 @@ static void ax25_kill_by_device(struct n again: ax25_for_each(s, &ax25_list) { if (s->ax25_dev == ax25_dev) { - s->ax25_dev = NULL; spin_unlock_bh(&ax25_list_lock); + lock_sock(s->sk); + s->ax25_dev = NULL; + release_sock(s->sk); ax25_disconnect(s, ENETUNREACH); spin_lock_bh(&ax25_list_lock);
From: Lin Ma linma@zju.edu.cn
commit 3e0588c291d6ce225f2b891753ca41d45ba42469 upstream.
There is a possible race condition (use-after-free) like below
(USE) | (FREE) ax25_sendmsg | ax25_queue_xmit | dev_queue_xmit | __dev_queue_xmit | __dev_xmit_skb | sch_direct_xmit | ... xmit_one | netdev_start_xmit | tty_ldisc_kill __netdev_start_xmit | mkiss_close ax_xmit | kfree ax_encaps | |
Even though there are two synchronization primitives before the kfree: 1. wait_for_completion(&ax->dead). This can prevent the race with routines from mkiss_ioctl. However, it cannot stop the routine coming from upper layer, i.e., the ax25_sendmsg.
2. netif_stop_queue(ax->dev). This line of code aims to halt the transmit queue, but it fails to stop a transmit routine that is already running.

This patch moves the kfree calls after unregister_netdev() to avoid the possible UAF, as unregister_netdev() is well synchronized and won't return while a routine is still running.
Signed-off-by: Lin Ma linma@zju.edu.cn Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/hamradio/mkiss.c | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-)
--- a/drivers/net/hamradio/mkiss.c +++ b/drivers/net/hamradio/mkiss.c @@ -803,13 +803,14 @@ static void mkiss_close(struct tty_struc */ netif_stop_queue(ax->dev);
- /* Free all AX25 frame buffers. */ - kfree(ax->rbuff); - kfree(ax->xbuff); - ax->tty = NULL;
unregister_netdev(ax->dev); + + /* Free all AX25 frame buffers after unreg. */ + kfree(ax->rbuff); + kfree(ax->xbuff); + free_netdev(ax->dev); }
From: Lin Ma linma@zju.edu.cn
commit b2f37aead1b82a770c48b5d583f35ec22aabb61e upstream.
The previous commit 3e0588c291d6 ("hamradio: defer ax25 kfree after unregister_netdev") reordered the kfree operations and the unregister_netdev operation to prevent a UAF.

This commit improves on the previous one by also deferring the nullification of the ax->tty pointer. Otherwise, a NULL pointer dereference bug occurs. Part of the stack trace is shown below.
BUG: kernel NULL pointer dereference, address: 0000000000000538 RIP: 0010:ax_xmit+0x1f9/0x400 ... Call Trace: dev_hard_start_xmit+0xec/0x320 sch_direct_xmit+0xea/0x240 __qdisc_run+0x166/0x5c0 __dev_queue_xmit+0x2c7/0xaf0 ax25_std_establish_data_link+0x59/0x60 ax25_connect+0x3a0/0x500 ? security_socket_connect+0x2b/0x40 __sys_connect+0x96/0xc0 ? __hrtimer_init+0xc0/0xc0 ? common_nsleep+0x2e/0x50 ? switch_fpu_return+0x139/0x1a0 __x64_sys_connect+0x11/0x20 do_syscall_64+0x33/0x40 entry_SYSCALL_64_after_hwframe+0x44/0xa9
The crash point is shown as below
static void ax_encaps(...) { ... set_bit(TTY_DO_WRITE_WAKEUP, &ax->tty->flags); // ax->tty = NULL! ... }
By placing the nullify action after unregister_netdev(), the ax->tty pointer cannot be set to NULL while a transmission is still in flight, because the net_device framework layer is well synchronized.
Signed-off-by: Lin Ma linma@zju.edu.cn Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/hamradio/mkiss.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/net/hamradio/mkiss.c +++ b/drivers/net/hamradio/mkiss.c @@ -803,14 +803,14 @@ static void mkiss_close(struct tty_struc */ netif_stop_queue(ax->dev);
- ax->tty = NULL; - unregister_netdev(ax->dev);
/* Free all AX25 frame buffers after unreg. */ kfree(ax->rbuff); kfree(ax->xbuff);
+ ax->tty = NULL; + free_netdev(ax->dev); }
From: Rémi Denis-Courmont remi@remlab.net
commit 75a2f31520095600f650597c0ac41f48b5ba0068 upstream.
This ioctl() implicitly assumed that the socket was already bound to a valid local socket name, i.e. Phonet object. If the socket was not bound, two separate problems would occur:
1) We'd send a pipe enablement request with an invalid source object. 2) Later socket calls could BUG on the socket unexpectedly being connected yet not bound to a valid object.
Reported-by: syzbot+2dc91e7fc3dea88b1e8a@syzkaller.appspotmail.com Signed-off-by: Rémi Denis-Courmont remi@remlab.net Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/phonet/pep.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/net/phonet/pep.c +++ b/net/phonet/pep.c @@ -959,6 +959,8 @@ static int pep_ioctl(struct sock *sk, in ret = -EBUSY; else if (sk->sk_state == TCP_ESTABLISHED) ret = -EISCONN; + else if (!pn->pn_sk.sobject) + ret = -EADDRNOTAVAIL; else ret = pep_sock_enable(sk, NULL, 0); release_sock(sk);
Hi!
CIP testing did not find any problems here:
https://gitlab.com/cip-project/cip-testing/linux-stable-rc-ci/-/tree/linux-4...
Tested-by: Pavel Machek (CIP) pavel@denx.de
Best regards, Pavel
On 2021/12/27 23:30, Greg Kroah-Hartman wrote:
Tested on arm64 and x86 for 4.19.223-rc1,
Kernel repo: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git Branch: linux-4.19.y Version: 4.19.223-rc1 Commit: 788fd8cb07c5bb4ad111f5c437f0ebc12eac9ba1 Compiler: gcc version 7.3.0 (GCC)
arm64: -------------------------------------------------------------------- Testcase Result Summary: total: 8942 passed: 8942 failed: 0 timeout: 0 --------------------------------------------------------------------
x86: -------------------------------------------------------------------- Testcase Result Summary: total: 8942 passed: 8942 failed: 0 timeout: 0 --------------------------------------------------------------------
Tested-by: Hulk Robot hulkrobot@huawei.com
On Mon, 27 Dec 2021 at 21:03, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
Results from Linaro’s test farm. No regressions on arm64, arm, x86_64, and i386.
Tested-by: Linux Kernel Functional Testing lkft@linaro.org
## Build * kernel: 4.19.223-rc1 * git: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git * git branch: linux-4.19.y * git commit: c3b6f5a58bb324123904facb3806b3bcc00bdccb * git describe: v4.19.222-39-gc3b6f5a58bb3 * test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-4.19.y/build/v4.19....
## No Test Regressions (compared to v4.19.222)
## No Test Fixes (compared to v4.19.222)
## Test result summary

total: 84075, pass: 68731, fail: 620, skip: 12910, xfail: 1814

## Build Summary

* arm: 254 total, 246 passed, 8 failed
* arm64: 35 total, 35 passed, 0 failed
* dragonboard-410c: 1 total, 1 passed, 0 failed
* hi6220-hikey: 1 total, 1 passed, 0 failed
* i386: 19 total, 19 passed, 0 failed
* juno-r2: 1 total, 1 passed, 0 failed
* mips: 26 total, 26 passed, 0 failed
* powerpc: 52 total, 48 passed, 4 failed
* s390: 12 total, 12 passed, 0 failed
* sparc: 12 total, 12 passed, 0 failed
* x15: 1 total, 1 passed, 0 failed
* x86: 1 total, 1 passed, 0 failed
* x86_64: 34 total, 34 passed, 0 failed

## Test suites summary

* fwts
* kselftest-android
* kselftest-arm64
* kselftest-arm64/arm64.btitest.bti_c_func
* kselftest-arm64/arm64.btitest.bti_j_func
* kselftest-arm64/arm64.btitest.bti_jc_func
* kselftest-arm64/arm64.btitest.bti_none_func
* kselftest-arm64/arm64.btitest.nohint_func
* kselftest-arm64/arm64.btitest.paciasp_func
* kselftest-arm64/arm64.nobtitest.bti_c_func
* kselftest-arm64/arm64.nobtitest.bti_j_func
* kselftest-arm64/arm64.nobtitest.bti_jc_func
* kselftest-arm64/arm64.nobtitest.bti_none_func
* kselftest-arm64/arm64.nobtitest.nohint_func
* kselftest-arm64/arm64.nobtitest.paciasp_func
* kselftest-bpf
* kselftest-breakpoints
* kselftest-capabilities
* kselftest-cgroup
* kselftest-clone3
* kselftest-core
* kselftest-cpu-hotplug
* kselftest-cpufreq
* kselftest-drivers
* kselftest-efivarfs
* kselftest-filesystems
* kselftest-firmware
* kselftest-fpu
* kselftest-futex
* kselftest-gpio
* kselftest-intel_pstate
* kselftest-ipc
* kselftest-ir
* kselftest-kcmp
* kselftest-kexec
* kselftest-kvm
* kselftest-lib
* kselftest-livepatch
* kselftest-membarrier
* kselftest-memfd
* kselftest-memory-hotplug
* kselftest-mincore
* kselftest-mount
* kselftest-mqueue
* kselftest-net
* kselftest-netfilter
* kselftest-nsfs
* kselftest-openat2
* kselftest-pid_namespace
* kselftest-pidfd
* kselftest-proc
* kselftest-pstore
* kselftest-ptrace
* kselftest-rseq
* kselftest-rtc
* kselftest-seccomp
* kselftest-sigaltstack
* kselftest-size
* kselftest-splice
* kselftest-static_keys
* kselftest-sync
* kselftest-sysctl
* kselftest-timens
* kselftest-timers
* kselftest-tmpfs
* kselftest-tpm2
* kselftest-user
* kselftest-vm
* kselftest-x86
* kselftest-zram
* kvm-unit-tests
* libhugetlbfs
* linux-log-parser
* ltp-cap_bounds-tests
* ltp-commands-tests
* ltp-containers-tests
* ltp-controllers-tests
* ltp-cpuhotplug-tests
* ltp-crypto-tests
* ltp-cve-tests
* ltp-dio-tests
* ltp-fcntl-locktests-tests
* ltp-filecaps-tests
* ltp-fs-tests
* ltp-fs_bind-tests
* ltp-fs_perms_simple-tests
* ltp-fsx-tests
* ltp-hugetlb-tests
* ltp-io-tests
* ltp-ipc-tests
* ltp-math-tests
* ltp-mm-tests
* ltp-nptl-tests
* ltp-open-posix-tests
* ltp-pty-tests
* ltp-sched-tests
* ltp-securebits-tests
* ltp-syscalls-tests
* ltp-tracing-tests
* network-basic-tests
* packetdrill
* perf
* rcutorture
* ssuite
* v4l2-compliance
--
Linaro LKFT
https://lkft.linaro.org
Hi Greg,
Build test:
mips (gcc version 11.2.1 20211214): 63 configs -> no failure
arm (gcc version 11.2.1 20211214): 116 configs -> no new failure
arm64 (gcc version 11.2.1 20211214): 2 configs -> no failure
x86_64 (gcc version 11.2.1 20211214): 4 configs -> no failure

Boot test:
x86_64: Booted on my test laptop. No regression.
x86_64: Booted on qemu. No regression. [1]
[1]. https://openqa.qa.codethink.co.uk/tests/554
Tested-by: Sudip Mukherjee sudip.mukherjee@codethink.co.uk
--
Regards
Sudip
Build results:
	total: 155 pass: 155 fail: 0
Qemu test results:
	total: 422 pass: 422 fail: 0
Tested-by: Guenter Roeck linux@roeck-us.net
Guenter
Compiled and booted on my test system. No dmesg regressions.
Tested-by: Shuah Khan skhan@linuxfoundation.org
thanks,
--
Shuah