As there is very little ordering in the KVM API, userspace can instantiate a half-baked GIC (missing its memory map, for example) at almost any time.
This means that, with the right timing, a thread running vcpu-0 can enter the kernel without a GIC configured and get a GIC created behind its back by another thread. Amusingly, it will pick up that GIC and start messing with the data structures without the GIC having been fully initialised.
Similarly, a thread running vcpu-1 can enter the kernel, and try to init the GIC that was previously created. Since this GIC isn't properly configured (no memory map), it fails to correctly initialise.
And that's the point where we decide to tear down the GIC, freeing all its resources. Behind vcpu-0's back. Things stop pretty abruptly, with a variety of symptoms. Clearly, this isn't good; we should be a bit more careful about this.
It is obvious that this guest is not viable, as it is missing some important part of its configuration. So instead of trying to tear bits of it down, let's just mark it as *dead*. It means that any further interaction from userspace will result in -EIO. The memory will be released on the "normal" path, when userspace gives up.
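The "mark it dead" approach can be sketched in plain userspace C. This is purely illustrative, not the kernel code: the real mechanism is `kvm_vm_dead()` raising `KVM_REQ_VM_DEAD` on every vCPU; `struct vm`, `vm_mark_dead()` and `vm_ioctl()` below are made-up stand-ins.

```c
#include <errno.h>
#include <stdbool.h>

/*
 * Illustrative sketch of the "dead VM" pattern: once the VM is
 * flagged as dead, every further interaction fails with -EIO
 * instead of attempting a risky partial teardown. All names here
 * are hypothetical stand-ins, not KVM symbols.
 */
struct vm {
	bool dead;
	/* ...rest of the VM state, freed on the normal exit path... */
};

/* Flag the VM as unusable instead of tearing it down in place. */
static void vm_mark_dead(struct vm *vm)
{
	vm->dead = true;
}

/* Every subsequent operation on the VM just fails. */
static int vm_ioctl(struct vm *vm)
{
	if (vm->dead)
		return -EIO;
	/* ...normal ioctl handling would go here... */
	return 0;
}
```

The memory itself is still released on the usual file-release path once userspace gives up, so nothing is freed behind a running vCPU's back.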
Cc: stable@vger.kernel.org
Reported-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/arm.c            | 3 +++
 arch/arm64/kvm/vgic/vgic-init.c | 6 +++---
 2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a0d01c46e4084..b97ada19f06a7 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -997,6 +997,9 @@ static int kvm_vcpu_suspend(struct kvm_vcpu *vcpu)
 static int check_vcpu_requests(struct kvm_vcpu *vcpu)
 {
 	if (kvm_request_pending(vcpu)) {
+		if (kvm_check_request(KVM_REQ_VM_DEAD, vcpu))
+			return -EIO;
+
 		if (kvm_check_request(KVM_REQ_SLEEP, vcpu))
 			kvm_vcpu_sleep(vcpu);
diff --git a/arch/arm64/kvm/vgic/vgic-init.c b/arch/arm64/kvm/vgic/vgic-init.c
index e7c53e8af3d16..c4cbf798e71a4 100644
--- a/arch/arm64/kvm/vgic/vgic-init.c
+++ b/arch/arm64/kvm/vgic/vgic-init.c
@@ -536,10 +536,10 @@ int kvm_vgic_map_resources(struct kvm *kvm)
 out:
 	mutex_unlock(&kvm->arch.config_lock);
 out_slots:
-	mutex_unlock(&kvm->slots_lock);
-
 	if (ret)
-		kvm_vgic_destroy(kvm);
+		kvm_vm_dead(kvm);
+
+	mutex_unlock(&kvm->slots_lock);
 
 	return ret;
 }
On Wed, Oct 09, 2024 at 07:36:03PM +0100, Marc Zyngier wrote:
> As there is very little ordering in the KVM API, userspace can instantiate a half-baked GIC (missing its memory map, for example) at almost any time.
> 
> This means that, with the right timing, a thread running vcpu-0 can enter the kernel without a GIC configured and get a GIC created behind its back by another thread. Amusingly, it will pick up that GIC and start messing with the data structures without the GIC having been fully initialised.
Huh, I'm definitely missing something. Could you remind me where we open up this race between KVM_RUN && kvm_vgic_create()?
I'd thought the fact that the latter takes all the vCPU mutexes and checks if any vCPU in the VM has run would be enough to guard against such a race, but clearly not...
> Similarly, a thread running vcpu-1 can enter the kernel, and try to init the GIC that was previously created. Since this GIC isn't properly configured (no memory map), it fails to correctly initialise.
> 
> And that's the point where we decide to tear down the GIC, freeing all its resources. Behind vcpu-0's back. Things stop pretty abruptly, with a variety of symptoms. Clearly, this isn't good; we should be a bit more careful about this.
> 
> It is obvious that this guest is not viable, as it is missing some important part of its configuration. So instead of trying to tear bits of it down, let's just mark it as *dead*. It means that any further interaction from userspace will result in -EIO. The memory will be released on the "normal" path, when userspace gives up.
> 
> Cc: stable@vger.kernel.org
> Reported-by: Alexander Potapenko <glider@google.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
Anyway, regardless of *how* we got here, it is pretty clear that tearing things down on the error path is a bad idea. So:
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
> ---
>  arch/arm64/kvm/arm.c            | 3 +++
>  arch/arm64/kvm/vgic/vgic-init.c | 6 +++---
>  2 files changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index a0d01c46e4084..b97ada19f06a7 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -997,6 +997,9 @@ static int kvm_vcpu_suspend(struct kvm_vcpu *vcpu)
>  static int check_vcpu_requests(struct kvm_vcpu *vcpu)
>  {
>  	if (kvm_request_pending(vcpu)) {
> +		if (kvm_check_request(KVM_REQ_VM_DEAD, vcpu))
> +			return -EIO;
> +
>  		if (kvm_check_request(KVM_REQ_SLEEP, vcpu))
>  			kvm_vcpu_sleep(vcpu);
> 
> diff --git a/arch/arm64/kvm/vgic/vgic-init.c b/arch/arm64/kvm/vgic/vgic-init.c
> index e7c53e8af3d16..c4cbf798e71a4 100644
> --- a/arch/arm64/kvm/vgic/vgic-init.c
> +++ b/arch/arm64/kvm/vgic/vgic-init.c
> @@ -536,10 +536,10 @@ int kvm_vgic_map_resources(struct kvm *kvm)
>  out:
>  	mutex_unlock(&kvm->arch.config_lock);
>  out_slots:
> -	mutex_unlock(&kvm->slots_lock);
> -
>  	if (ret)
> -		kvm_vgic_destroy(kvm);
> +		kvm_vm_dead(kvm);
> +
> +	mutex_unlock(&kvm->slots_lock);
>  
>  	return ret;
>  }
> -- 
> 2.39.2
On Wed, Oct 09, 2024, Oliver Upton wrote:
On Wed, Oct 09, 2024 at 07:36:03PM +0100, Marc Zyngier wrote:
As there is very little ordering in the KVM API, userspace can instanciate a half-baked GIC (missing its memory map, for example) at almost any time.
This means that, with the right timing, a thread running vcpu-0 can enter the kernel without a GIC configured and get a GIC created behind its back by another thread. Amusingly, it will pick up that GIC and start messing with the data structures without the GIC having been fully initialised.
> Huh, I'm definitely missing something. Could you remind me where we open up this race between KVM_RUN && kvm_vgic_create()?
> 
> I'd thought the fact that the latter takes all the vCPU mutexes and checks if any vCPU in the VM has run would be enough to guard against such a race, but clearly not...
Any chance that fixing the bugs where vCPU0 can be accessed (and run!) before it's fully online[*] would help? E.g. if that closes the vCPU0 hole, maybe the vCPU1 case can be handled a bit more gracefully?
[*] https://lore.kernel.org/all/20241009150455.1057573-1-seanjc@google.com
> > Similarly, a thread running vcpu-1 can enter the kernel, and try to init the GIC that was previously created. Since this GIC isn't properly configured (no memory map), it fails to correctly initialise.
> > 
> > And that's the point where we decide to tear down the GIC, freeing all its resources. Behind vcpu-0's back. Things stop pretty abruptly, with a variety of symptoms. Clearly, this isn't good; we should be a bit more careful about this.
> > 
> > It is obvious that this guest is not viable, as it is missing some important part of its configuration. So instead of trying to tear bits of it down, let's just mark it as *dead*. It means that any further interaction from userspace will result in -EIO. The memory will be released on the "normal" path, when userspace gives up.
On Wed, Oct 09, 2024 at 12:36:32PM -0700, Sean Christopherson wrote:
> On Wed, Oct 09, 2024, Oliver Upton wrote:
> > On Wed, Oct 09, 2024 at 07:36:03PM +0100, Marc Zyngier wrote:
> > > As there is very little ordering in the KVM API, userspace can instantiate a half-baked GIC (missing its memory map, for example) at almost any time.
> > > 
> > > This means that, with the right timing, a thread running vcpu-0 can enter the kernel without a GIC configured and get a GIC created behind its back by another thread. Amusingly, it will pick up that GIC and start messing with the data structures without the GIC having been fully initialised.
> > 
> > Huh, I'm definitely missing something. Could you remind me where we open up this race between KVM_RUN && kvm_vgic_create()?
Ah, duh, I see it now. kvm_arch_vcpu_run_pid_change() doesn't serialize on a VM lock, and kvm_vgic_map_resources() has an early return for vgic_ready() letting it blow straight past the config_lock.
Then if we can't register the MMIO region for the distributor everything comes crashing down and a vCPU has made it into the KVM_RUN loop w/ the VGIC-shaped rug pulled out from under it. There's definitely another functional bug here where a vCPU's attempts to poke the distributor wind up reaching userspace as MMIO exits. But we can worry about that another day.
If memory serves, kvm_vgic_map_resources() used to do all of this behind the config_lock to cure the race, but that wound up inverting lock ordering on srcu.
Note to self: Impose strict ordering on GIC initialization v. vCPU creation if/when we get a new flavor of irqchip.
> > I'd thought the fact that the latter takes all the vCPU mutexes and checks if any vCPU in the VM has run would be enough to guard against such a race, but clearly not...
> 
> Any chance that fixing the bugs where vCPU0 can be accessed (and run!) before it's fully online would help?
That's an equally gross bug, but kvm_vgic_create() should still be safe w.r.t. vCPU creation since both hold the kvm->lock in the right spot. That is, since kvm_vgic_create() is called under the lock any vCPUs visible to userspace should exist in the vCPU xarray.
The crappy assumption here is kvm_arch_vcpu_run_pid_change() and its callees are allowed to destroy VM-scoped structures in error handling.
> E.g. if that closes the vCPU0 hole, maybe the vCPU1 case can be handled a bit more gracefully?
I think this is about as graceful as we can be. The sorts of screw-ups that precipitate this error handling may involve stupidity across several KVM ioctls, meaning it is highly unlikely to be attributable / recoverable.
On Wed, Oct 09, 2024 at 11:27:52PM +0000, Oliver Upton wrote:
> On Wed, Oct 09, 2024 at 12:36:32PM -0700, Sean Christopherson wrote:
> > On Wed, Oct 09, 2024, Oliver Upton wrote:
> > > On Wed, Oct 09, 2024 at 07:36:03PM +0100, Marc Zyngier wrote:
> > > > As there is very little ordering in the KVM API, userspace can instantiate a half-baked GIC (missing its memory map, for example) at almost any time.
> > > > 
> > > > This means that, with the right timing, a thread running vcpu-0 can enter the kernel without a GIC configured and get a GIC created behind its back by another thread. Amusingly, it will pick up that GIC and start messing with the data structures without the GIC having been fully initialised.
> > > 
> > > Huh, I'm definitely missing something. Could you remind me where we open up this race between KVM_RUN && kvm_vgic_create()?
> Ah, duh, I see it now. kvm_arch_vcpu_run_pid_change() doesn't serialize on a VM lock, and kvm_vgic_map_resources() has an early return for vgic_ready() letting it blow straight past the config_lock.
> 
> Then if we can't register the MMIO region for the distributor everything comes crashing down and a vCPU has made it into the KVM_RUN loop w/ the VGIC-shaped rug pulled out from under it. There's definitely another functional bug here where a vCPU's attempts to poke the
a theoretical bug, that is. In practice the window to race against likely isn't big enough to get the in-guest vCPU to the point of poking the halfway-initialized distributor.
> distributor wind up reaching userspace as MMIO exits. But we can worry about that another day.
> 
> If memory serves, kvm_vgic_map_resources() used to do all of this behind the config_lock to cure the race, but that wound up inverting lock ordering on srcu.
> 
> Note to self: Impose strict ordering on GIC initialization v. vCPU creation if/when we get a new flavor of irqchip.
> 
> > > I'd thought the fact that the latter takes all the vCPU mutexes and checks if any vCPU in the VM has run would be enough to guard against such a race, but clearly not...
> > 
> > Any chance that fixing the bugs where vCPU0 can be accessed (and run!) before it's fully online would help?
> 
> That's an equally gross bug, but kvm_vgic_create() should still be safe w.r.t. vCPU creation since both hold the kvm->lock in the right spot. That is, since kvm_vgic_create() is called under the lock any vCPUs visible to userspace should exist in the vCPU xarray.
> 
> The crappy assumption here is kvm_arch_vcpu_run_pid_change() and its callees are allowed to destroy VM-scoped structures in error handling.
> 
> > E.g. if that closes the vCPU0 hole, maybe the vCPU1 case can be handled a bit more gracefully?
> 
> I think this is about as graceful as we can be. The sorts of screw-ups that precipitate this error handling may involve stupidity across several KVM ioctls, meaning it is highly unlikely to be attributable / recoverable.
-- 
Thanks,
Oliver
On Thu, 10 Oct 2024 00:27:46 +0100, Oliver Upton <oliver.upton@linux.dev> wrote:
> On Wed, Oct 09, 2024 at 12:36:32PM -0700, Sean Christopherson wrote:
> > On Wed, Oct 09, 2024, Oliver Upton wrote:
> > > On Wed, Oct 09, 2024 at 07:36:03PM +0100, Marc Zyngier wrote:
> > > > As there is very little ordering in the KVM API, userspace can instantiate a half-baked GIC (missing its memory map, for example) at almost any time.
> > > > 
> > > > This means that, with the right timing, a thread running vcpu-0 can enter the kernel without a GIC configured and get a GIC created behind its back by another thread. Amusingly, it will pick up that GIC and start messing with the data structures without the GIC having been fully initialised.
> > > 
> > > Huh, I'm definitely missing something. Could you remind me where we open up this race between KVM_RUN && kvm_vgic_create()?

Sorry, I sent the patch bombs away and decided to get my life back for the evening... Doesn't help that the commit message isn't very clear (if not wrong in some respects).
> Ah, duh, I see it now. kvm_arch_vcpu_run_pid_change() doesn't serialize on a VM lock, and kvm_vgic_map_resources() has an early return for vgic_ready() letting it blow straight past the config_lock.

That. The problem is not so much with the vgic creation (which doesn't do much) but with the vgic_init() part followed by the map_resources horror.

> Then if we can't register the MMIO region for the distributor everything comes crashing down and a vCPU has made it into the KVM_RUN loop w/ the VGIC-shaped rug pulled out from under it. There's definitely another functional bug here where a vCPU's attempts to poke the distributor wind up reaching userspace as MMIO exits. But we can worry about that another day.

I don't think that one is that bad. Userspace got us here, and they now see an MMIO exit for something that it is not prepared to handle. Suck it up and die (on a black size M t-shirt, please).

> If memory serves, kvm_vgic_map_resources() used to do all of this behind the config_lock to cure the race, but that wound up inverting lock ordering on srcu.

Probably something like that. We also used to hold the kvm lock, which made everything much simpler, but awfully wrong.

> Note to self: Impose strict ordering on GIC initialization v. vCPU creation if/when we get a new flavor of irqchip.

One of the things we should have done when introducing GICv3 is to impose that at KVM_DEV_ARM_VGIC_CTRL_INIT, the GIC memory map is final. I remember some push-back on the QEMU side of things, as they like to decouple things, but this has proved to be a nightmare.
> > > I'd thought the fact that the latter takes all the vCPU mutexes and checks if any vCPU in the VM has run would be enough to guard against such a race, but clearly not...
> > 
> > Any chance that fixing the bugs where vCPU0 can be accessed (and run!) before it's fully online would help?
> 
> That's an equally gross bug, but kvm_vgic_create() should still be safe w.r.t. vCPU creation since both hold the kvm->lock in the right spot. That is, since kvm_vgic_create() is called under the lock any vCPUs visible to userspace should exist in the vCPU xarray.
> 
> The crappy assumption here is kvm_arch_vcpu_run_pid_change() and its callees are allowed to destroy VM-scoped structures in error handling.
I think this is symptomatic of a more general issue: we perform VM-wide configuration in the context of a vcpu. We have tons of this stuff to paper over the lack of a "this VM is fully configured" barrier.
I wonder whether we could sidestep things by punting the finalisation of the VM to a different context (workqueue?) and simply return -EAGAIN or -EINTR to userspace while we're processing it. That doesn't solve the "I'm missing parts of the address map and I'm going to die" part though.
> > E.g. if that closes the vCPU0 hole, maybe the vCPU1 case can be handled a bit more gracefully?
> 
> I think this is about as graceful as we can be. The sorts of screw-ups that precipitate this error handling may involve stupidity across several KVM ioctls, meaning it is highly unlikely to be attributable / recoverable.
That's my take as well. We're faced with luserspace that's out to get us, and by the time we're in the context of a vcpu, it is too late.
I don't see how to fix this without mandating a UABI change.
Thanks,
M.
On Thu, Oct 10, 2024 at 08:54:43AM +0100, Marc Zyngier wrote:
> On Thu, 10 Oct 2024 00:27:46 +0100, Oliver Upton <oliver.upton@linux.dev> wrote:
> > Then if we can't register the MMIO region for the distributor everything comes crashing down and a vCPU has made it into the KVM_RUN loop w/ the VGIC-shaped rug pulled out from under it. There's definitely another functional bug here where a vCPU's attempts to poke the distributor wind up reaching userspace as MMIO exits. But we can worry about that another day.
> 
> I don't think that one is that bad. Userspace got us here, and they now see an MMIO exit for something that it is not prepared to handle. Suck it up and die (on a black size M t-shirt, please).
LOL, I'll remember that.
The situation I have in mind is a bit harder to blame on userspace, though. Supposing that the whole VM was set up correctly, multiple vCPUs entering KVM_RUN concurrently could cause this race and have 'unexpected' MMIO exits go out to userspace.
  vcpu-0                          vcpu-1
  ======                          ======
  kvm_vgic_map_resources()
    dist->ready = true
    mutex_unlock(config_lock)
                                  kvm_vgic_map_resources()
                                    if (vgic_ready())
                                      return 0
                                  < enter guest >
                                  typer = writel(0, GICD_CTLR)
                                  < data abort >
                                  kvm_io_bus_write(...) <= No GICD, out to userspace
  vgic_register_dist_iodev()
A small but stupid window to race with.
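That window can be sketched in userspace C. This is an illustrative sketch, not KVM code, and it assumes the fix is simply to drop the unlocked `vgic_ready()` early-out and check under the lock; `config_lock`, `ready` and `dist_registered` below are made-up stand-ins for the real symbols.

```c
#include <pthread.h>
#include <stdbool.h>

/*
 * Hypothetical sketch of closing the race: the "already done?"
 * check happens only while holding the lock, and readiness is
 * published only after the resource is actually registered.
 */
static pthread_mutex_t config_lock = PTHREAD_MUTEX_INITIALIZER;
static bool ready;
static int dist_registered;	/* stands in for the distributor iodev */

static int map_resources(void)
{
	pthread_mutex_lock(&config_lock);
	if (ready) {		/* early-out, but only under the lock */
		pthread_mutex_unlock(&config_lock);
		return 0;
	}
	dist_registered = 1;	/* "register" the MMIO region first... */
	ready = true;		/* ...and only then publish readiness */
	pthread_mutex_unlock(&config_lock);
	return 0;
}
```

The buggy shape had the `ready` test before taking the lock and set `ready` before the registration completed, which is exactly what lets a second caller sprint past and enter the guest with no distributor mapped.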
> > If memory serves, kvm_vgic_map_resources() used to do all of this behind the config_lock to cure the race, but that wound up inverting lock ordering on srcu.
> 
> Probably something like that. We also used to hold the kvm lock, which made everything much simpler, but awfully wrong.
> 
> > Note to self: Impose strict ordering on GIC initialization v. vCPU creation if/when we get a new flavor of irqchip.
> 
> One of the things we should have done when introducing GICv3 is to impose that at KVM_DEV_ARM_VGIC_CTRL_INIT, the GIC memory map is final. I remember some push-back on the QEMU side of things, as they like to decouple things, but this has proved to be a nightmare.
Pushing more of the initialization complexity into userspace feels like the right thing. Since we clearly have no idea what we're doing :)
> > The crappy assumption here is kvm_arch_vcpu_run_pid_change() and its callees are allowed to destroy VM-scoped structures in error handling.
> 
> I think this is symptomatic of a more general issue: we perform VM-wide configuration in the context of a vcpu. We have tons of this stuff to paper over the lack of a "this VM is fully configured" barrier.
> 
> I wonder whether we could sidestep things by punting the finalisation of the VM to a different context (workqueue?) and simply return -EAGAIN or -EINTR to userspace while we're processing it. That doesn't solve the "I'm missing parts of the address map and I'm going to die" part though.
Throwing it back at userspace would be nice, but unfortunately for ABI I think we need to block/spin vCPUs in the kernel til the VM is in fully working condition. A fragile userspace could explode for a 'spurious' EAGAIN/EINTR where there wasn't one before.
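The block-until-ready idea can be illustrated with a condition variable. Purely hypothetical userspace code, not a KVM implementation: the kernel would need an equivalent wait in the vCPU run loop, and none of the names below exist in KVM.

```c
#include <pthread.h>
#include <stdbool.h>

/*
 * Sketch of blocking vCPUs until the VM is fully configured,
 * instead of surfacing a 'spurious' EAGAIN/EINTR to userspace.
 * All names are illustrative stand-ins.
 */
static pthread_mutex_t vm_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t vm_ready_cond = PTHREAD_COND_INITIALIZER;
static bool vm_finalised;

/* A vCPU entering "run" waits here rather than erroring out. */
static void vcpu_wait_for_vm(void)
{
	pthread_mutex_lock(&vm_lock);
	while (!vm_finalised)	/* loop to tolerate spurious wakeups */
		pthread_cond_wait(&vm_ready_cond, &vm_lock);
	pthread_mutex_unlock(&vm_lock);
}

/* Called once VM-wide configuration is complete. */
static void vm_finalise(void)
{
	pthread_mutex_lock(&vm_lock);
	vm_finalised = true;
	pthread_cond_broadcast(&vm_ready_cond);
	pthread_mutex_unlock(&vm_lock);
}
```

From userspace's point of view KVM_RUN just takes a little longer, which is the ABI-compatible behaviour being argued for here.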
On Thu, 10 Oct 2024 09:47:04 +0100, Oliver Upton <oliver.upton@linux.dev> wrote:
> On Thu, Oct 10, 2024 at 08:54:43AM +0100, Marc Zyngier wrote:
> > On Thu, 10 Oct 2024 00:27:46 +0100, Oliver Upton <oliver.upton@linux.dev> wrote:
> > > Then if we can't register the MMIO region for the distributor everything comes crashing down and a vCPU has made it into the KVM_RUN loop w/ the VGIC-shaped rug pulled out from under it. There's definitely another functional bug here where a vCPU's attempts to poke the distributor wind up reaching userspace as MMIO exits. But we can worry about that another day.
> > 
> > I don't think that one is that bad. Userspace got us here, and they now see an MMIO exit for something that it is not prepared to handle. Suck it up and die (on a black size M t-shirt, please).
> 
> LOL, I'll remember that.
> 
> The situation I have in mind is a bit harder to blame on userspace, though. Supposing that the whole VM was set up correctly, multiple vCPUs entering KVM_RUN concurrently could cause this race and have 'unexpected' MMIO exits go out to userspace.
> 
>   vcpu-0                          vcpu-1
>   ======                          ======
>   kvm_vgic_map_resources()
>     dist->ready = true
>     mutex_unlock(config_lock)
>                                   kvm_vgic_map_resources()
>                                     if (vgic_ready())
>                                       return 0
>                                   < enter guest >
>                                   typer = writel(0, GICD_CTLR)
>                                   < data abort >
>                                   kvm_io_bus_write(...) <= No GICD, out to userspace
>   vgic_register_dist_iodev()
> 
> A small but stupid window to race with.
Ah, gotcha. I guess getting rid of the early-out in kvm_vgic_map_resources() would plug that one. Want to post a fix for that?
> > > If memory serves, kvm_vgic_map_resources() used to do all of this behind the config_lock to cure the race, but that wound up inverting lock ordering on srcu.
> > 
> > Probably something like that. We also used to hold the kvm lock, which made everything much simpler, but awfully wrong.
> > 
> > > Note to self: Impose strict ordering on GIC initialization v. vCPU creation if/when we get a new flavor of irqchip.
> > 
> > One of the things we should have done when introducing GICv3 is to impose that at KVM_DEV_ARM_VGIC_CTRL_INIT, the GIC memory map is final. I remember some push-back on the QEMU side of things, as they like to decouple things, but this has proved to be a nightmare.
> 
> Pushing more of the initialization complexity into userspace feels like the right thing. Since we clearly have no idea what we're doing :)
KVM APIv2?
> > > The crappy assumption here is kvm_arch_vcpu_run_pid_change() and its callees are allowed to destroy VM-scoped structures in error handling.
> > 
> > I think this is symptomatic of a more general issue: we perform VM-wide configuration in the context of a vcpu. We have tons of this stuff to paper over the lack of a "this VM is fully configured" barrier.
> > 
> > I wonder whether we could sidestep things by punting the finalisation of the VM to a different context (workqueue?) and simply return -EAGAIN or -EINTR to userspace while we're processing it. That doesn't solve the "I'm missing parts of the address map and I'm going to die" part though.
> 
> Throwing it back at userspace would be nice, but unfortunately for ABI I think we need to block/spin vCPUs in the kernel til the VM is in fully working condition. A fragile userspace could explode for a 'spurious' EAGAIN/EINTR where there wasn't one before.
EINTR needs to be handled already, as this is how you report preemption by a signal. But yeah, overall, I'm not enthralled with much so far...
M.
On Thu, Oct 10, 2024 at 01:47:05PM +0100, Marc Zyngier wrote:
> On Thu, 10 Oct 2024 09:47:04 +0100, Oliver Upton <oliver.upton@linux.dev> wrote:
> > A small but stupid window to race with.
> 
> Ah, gotcha. I guess getting rid of the early-out in kvm_vgic_map_resources() would plug that one. Want to post a fix for that?
Yep, will do.
> > > > If memory serves, kvm_vgic_map_resources() used to do all of this behind the config_lock to cure the race, but that wound up inverting lock ordering on srcu.
> > > 
> > > Probably something like that. We also used to hold the kvm lock, which made everything much simpler, but awfully wrong.
> > > 
> > > > Note to self: Impose strict ordering on GIC initialization v. vCPU creation if/when we get a new flavor of irqchip.
> > > 
> > > One of the things we should have done when introducing GICv3 is to impose that at KVM_DEV_ARM_VGIC_CTRL_INIT, the GIC memory map is final. I remember some push-back on the QEMU side of things, as they like to decouple things, but this has proved to be a nightmare.
> > 
> > Pushing more of the initialization complexity into userspace feels like the right thing. Since we clearly have no idea what we're doing :)
> 
> KVM APIv2?
Even better, we can just go straight to v3 and skip all the mistakes we would've made in v2.
> > > > The crappy assumption here is kvm_arch_vcpu_run_pid_change() and its callees are allowed to destroy VM-scoped structures in error handling.
> > > 
> > > I think this is symptomatic of a more general issue: we perform VM-wide configuration in the context of a vcpu. We have tons of this stuff to paper over the lack of a "this VM is fully configured" barrier.
> > > 
> > > I wonder whether we could sidestep things by punting the finalisation of the VM to a different context (workqueue?) and simply return -EAGAIN or -EINTR to userspace while we're processing it. That doesn't solve the "I'm missing parts of the address map and I'm going to die" part though.
> > 
> > Throwing it back at userspace would be nice, but unfortunately for ABI I think we need to block/spin vCPUs in the kernel til the VM is in fully working condition. A fragile userspace could explode for a 'spurious' EAGAIN/EINTR where there wasn't one before.
> 
> EINTR needs to be handled already, as this is how you report preemption by a signal.
Of course, I'm just assuming userspace is mean and will complain if no signal actually arrives.
On Wed, 09 Oct 2024 19:36:03 +0100, Marc Zyngier wrote:
> As there is very little ordering in the KVM API, userspace can instantiate a half-baked GIC (missing its memory map, for example) at almost any time.
> 
> This means that, with the right timing, a thread running vcpu-0 can enter the kernel without a GIC configured and get a GIC created behind its back by another thread. Amusingly, it will pick up that GIC and start messing with the data structures without the GIC having been fully initialised.
> 
> [...]
Applied to fixes, thanks!
[1/1] KVM: arm64: Don't eagerly teardown the vgic on init error
      commit: df5fd75ee305cb5927e0b1a0b46cc988ad8db2b1
Cheers,
M.