From: Rob Clark <robdclark@chromium.org>
In some cases, like double-buffered rendering, missing vblanks can trick the GPU into running at a lower frequency, when really we want to be running at a higher frequency so that we do not miss vblanks in the first place.
This is partially inspired by a trick i915 does, but implemented via dma-fence for a couple of reasons:
1) To continue to be able to use the atomic helpers
2) To support cases where the display and GPU are different drivers
The last patch is just a proof of concept; in reality I think it may want to be a bit more clever. But I'm sending this out as-is as an RFC to get feedback.
Rob Clark (3):
  dma-fence: Add boost fence op
  drm/atomic: Call dma_fence_boost() when we've missed a vblank
  drm/msm: Wire up gpu boost

 drivers/gpu/drm/drm_atomic_helper.c | 11 +++++++++++
 drivers/gpu/drm/msm/msm_fence.c     | 10 ++++++++++
 drivers/gpu/drm/msm/msm_gpu.c       | 13 +++++++++++++
 drivers/gpu/drm/msm/msm_gpu.h       |  2 ++
 include/linux/dma-fence.h           | 26 ++++++++++++++++++++++++++
 5 files changed, 62 insertions(+)
From: Rob Clark <robdclark@chromium.org>
Add a way to hint to the fence signaler that a fence waiter has missed a deadline waiting on the fence.
In some cases, missing a vblank can result in lower gpu utilization, when really we want to go in the opposite direction and boost gpu freq. The boost callback gives some feedback to the fence signaler that we are missing deadlines, so it can take this into account in its freq/utilization calculations.
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
 include/linux/dma-fence.h | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
index 9f12efaaa93a..172702521acc 100644
--- a/include/linux/dma-fence.h
+++ b/include/linux/dma-fence.h
@@ -231,6 +231,17 @@ struct dma_fence_ops {
 	signed long (*wait)(struct dma_fence *fence,
 			    bool intr, signed long timeout);
 
+	/**
+	 * @boost:
+	 *
+	 * Optional callback, to indicate that a fence waiter missed a deadline.
+	 * This can serve as a signal that (if possible) whatever signals the
+	 * fence should boost its clocks.
+	 *
+	 * This can be called in any context that can call dma_fence_wait().
+	 */
+	void (*boost)(struct dma_fence *fence);
+
 	/**
 	 * @release:
 	 *
@@ -586,6 +597,21 @@ static inline signed long dma_fence_wait(struct dma_fence *fence, bool intr)
 	return ret < 0 ? ret : 0;
 }
 
+/**
+ * dma_fence_boost - hint from waiter that it missed a deadline
+ *
+ * @fence: the fence that caused the missed deadline
+ *
+ * This function gives a hint from a fence waiter that a deadline was
+ * missed, so that the fence signaler can factor this in to device
+ * power state decisions.
+ */
+static inline void dma_fence_boost(struct dma_fence *fence)
+{
+	if (fence->ops->boost)
+		fence->ops->boost(fence);
+}
+
 struct dma_fence *dma_fence_get_stub(void);
 u64 dma_fence_context_alloc(unsigned num);
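For reference, the waiter side of this (what patch 2/3 wires into the atomic helpers when a vblank is missed) would look roughly like the sketch below; the helper name and the exact callsite are made up purely for illustration and are not part of this series:

/*
 * Illustrative sketch only -- not the actual patch 2/3.  A waiter with a
 * deadline (e.g. the next vblank) waits with a timeout, and if the fence
 * still has not signaled by then, it nudges the signaler via the new hook.
 */
static void example_wait_and_boost(struct dma_fence *fence,
				   unsigned long timeout_jiffies)
{
	signed long ret;

	ret = dma_fence_wait_timeout(fence, false, timeout_jiffies);

	/* Timed out without the fence signaling: hint the signaler to clock up. */
	if (ret == 0)
		dma_fence_boost(fence);
}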
Uff, that looks very hardware specific to me.
As far as I can see you could also implement this completely inside the backend by starting a timer on enable_signaling, couldn't you?
Christian.
On Wed, May 19, 2021 at 11:47 PM Christian König christian.koenig@amd.com wrote:
Uff, that looks very hardware specific to me.
How so? I'm not sure I agree.. and even if it were not useful for some hw, it should be useful for enough drivers (and harm no drivers), so I still think it is a good idea.
The fallback plan is to go the i915 route and stop using atomic helpers and do the same thing inside the driver, but that doesn't help any of the cases where you have a separate kms and gpu driver.
As far as I can see you can also implement completely inside the backend by starting a timer on enable_signaling, don't you?
Not really.. I mean, the fact that something waited on a fence could be a useful input signal to gpu freq governor, but it is entirely insufficient..
If the cpu is spending a lot of time waiting on a fence, cpufreq will clock down so you spend less time waiting. And no problem has been solved. You absolutely need the concept of a missed deadline, and a timer doesn't give you that.
BR, -R
Am 20.05.21 um 16:07 schrieb Rob Clark:
On Wed, May 19, 2021 at 11:47 PM Christian König christian.koenig@amd.com wrote:
Uff, that looks very hardware specific to me.
Howso? I'm not sure I agree.. and even if it was not useful for some hw, it should be useful for enough drivers (and harm no drivers), so I still think it is a good idea
The fallback plan is to go the i915 route and stop using atomic helpers and do the same thing inside the driver, but that doesn't help any of the cases where you have a separate kms and gpu driver.
Yeah, that's certainly not something we want.
As far as I can see you can also implement completely inside the backend by starting a timer on enable_signaling, don't you?
Not really.. I mean, the fact that something waited on a fence could be a useful input signal to gpu freq governor, but it is entirely insufficient..
If the cpu is spending a lot of time waiting on a fence, cpufreq will clock down so you spend less time waiting. And no problem has been solved. You absolutely need the concept of a missed deadline, and a timer doesn't give you that.
Ok then I probably don't understand the use case here.
What exactly do you try to solve?
Thanks, Christian.
On Thu, May 20, 2021 at 7:11 AM Christian König christian.koenig@amd.com wrote:
Am 20.05.21 um 16:07 schrieb Rob Clark:
On Wed, May 19, 2021 at 11:47 PM Christian König christian.koenig@amd.com wrote:
Uff, that looks very hardware specific to me.
Howso? I'm not sure I agree.. and even if it was not useful for some hw, it should be useful for enough drivers (and harm no drivers), so I still think it is a good idea
The fallback plan is to go the i915 route and stop using atomic helpers and do the same thing inside the driver, but that doesn't help any of the cases where you have a separate kms and gpu driver.
Yeah, that's certainly not something we want.
As far as I can see you can also implement completely inside the backend by starting a timer on enable_signaling, don't you?
Not really.. I mean, the fact that something waited on a fence could be a useful input signal to gpu freq governor, but it is entirely insufficient..
If the cpu is spending a lot of time waiting on a fence, cpufreq will clock down so you spend less time waiting. And no problem has been solved. You absolutely need the concept of a missed deadline, and a timer doesn't give you that.
Ok then I probably don't understand the use case here.
What exactly do you try to solve?
Basically situations where you are ping-ponging between GPU and CPU.. for example if you are double buffering instead of triple buffering, and doing vblank-synced pageflips. The GPU, without any extra signal, could get stuck at 30fps and a low gpu freq, because it ends up idle while waiting an extra vblank cycle for the next back-buffer to become available. Whereas if it boosted up to a higher freq and stopped missing the vblank deadline, it would be less idle, because it would get the next back-buffer sooner.
BR, -R
Am 20.05.21 um 16:54 schrieb Rob Clark:
On Thu, May 20, 2021 at 7:11 AM Christian König christian.koenig@amd.com wrote:
Am 20.05.21 um 16:07 schrieb Rob Clark:
On Wed, May 19, 2021 at 11:47 PM Christian König christian.koenig@amd.com wrote:
Uff, that looks very hardware specific to me.
Howso? I'm not sure I agree.. and even if it was not useful for some hw, it should be useful for enough drivers (and harm no drivers), so I still think it is a good idea
The fallback plan is to go the i915 route and stop using atomic helpers and do the same thing inside the driver, but that doesn't help any of the cases where you have a separate kms and gpu driver.
Yeah, that's certainly not something we want.
As far as I can see you can also implement completely inside the backend by starting a timer on enable_signaling, don't you?
Not really.. I mean, the fact that something waited on a fence could be a useful input signal to gpu freq governor, but it is entirely insufficient..
If the cpu is spending a lot of time waiting on a fence, cpufreq will clock down so you spend less time waiting. And no problem has been solved. You absolutely need the concept of a missed deadline, and a timer doesn't give you that.
Ok then I probably don't understand the use case here.
What exactly do you try to solve?
Basically situations where you are ping-ponging between GPU and CPU.. for example if you are double buffering instead of triple buffering, and doing vblank sync'd pageflips. The GPU, without any extra signal, could get stuck at 30fps and a low gpu freq, because it ends up idle while waiting for an extra vblank cycle for the next back-buffer to become available. Whereas if it boosted up to a higher freq and stopped missing a vblank deadline, it would be less idle due to getting the next back-buffer sooner (due to not missing a vblank deadline).
Ok, that is the why, but what about the how?
How does it help to have this boost callback instead of just starting a timer on enable_signaling and stopping it when the signal arrives?
Regards, Christian.
On Thu, May 20, 2021 at 06:01:39PM +0200, Christian König wrote:
Am 20.05.21 um 16:54 schrieb Rob Clark:
On Thu, May 20, 2021 at 7:11 AM Christian König christian.koenig@amd.com wrote:
Am 20.05.21 um 16:07 schrieb Rob Clark:
On Wed, May 19, 2021 at 11:47 PM Christian König christian.koenig@amd.com wrote:
Uff, that looks very hardware specific to me.
Howso? I'm not sure I agree.. and even if it was not useful for some hw, it should be useful for enough drivers (and harm no drivers), so I still think it is a good idea
The fallback plan is to go the i915 route and stop using atomic helpers and do the same thing inside the driver, but that doesn't help any of the cases where you have a separate kms and gpu driver.
Yeah, that's certainly not something we want.
As far as I can see you can also implement completely inside the backend by starting a timer on enable_signaling, don't you?
Not really.. I mean, the fact that something waited on a fence could be a useful input signal to gpu freq governor, but it is entirely insufficient..
If the cpu is spending a lot of time waiting on a fence, cpufreq will clock down so you spend less time waiting. And no problem has been solved. You absolutely need the concept of a missed deadline, and a timer doesn't give you that.
Ok then I probably don't understand the use case here.
What exactly do you try to solve?
Basically situations where you are ping-ponging between GPU and CPU.. for example if you are double buffering instead of triple buffering, and doing vblank sync'd pageflips. The GPU, without any extra signal, could get stuck at 30fps and a low gpu freq, because it ends up idle while waiting for an extra vblank cycle for the next back-buffer to become available. Whereas if it boosted up to a higher freq and stopped missing a vblank deadline, it would be less idle due to getting the next back-buffer sooner (due to not missing a vblank deadline).
Ok the is the why, but what about the how?
How does it help to have this boost callback and not just start a time on enable signaling and stop it when the signal arrives?
Because the render side (or drm/scheduler, if msm used that) has no idea which vblank a rendering is actually for.
So boosting right when you've missed your frame (not what Rob implements currently, but fixable) is the right semantics.
The other issue is that for cpu waits, we want to differentiate between fence waits that userspace does intentionally (e.g. the wait ioctl) and waits that random other things are doing within the kernel to keep track of progress.
For the former we know that userspace is stuck waiting for the gpu, and we probably want to boost. For the latter we most definitely do _not_ want to boost.
Otoh I do agree with you that the current api is a bit awkward, so perhaps we do need a dma_fence_userspace_wait wrapper which boosts automatically after a bit. And similarly perhaps a drm_vblank_dma_fence_wait, where you give it a vblank target, and if the fence isn't signalled by then, we kick it real hard.
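To make that concrete, a minimal sketch of what such a userspace-wait wrapper could look like (nothing here exists yet; the name and the ~33 ms boost threshold are just placeholders):

/* Placeholder sketch only: wait a bit, and if the fence is still not
 * signaled, assume userspace is visibly stuck on the GPU and boost.
 */
static inline signed long
dma_fence_userspace_wait(struct dma_fence *fence, bool intr,
			 signed long timeout)
{
	signed long boost_after = msecs_to_jiffies(33);
	signed long ret;

	if (timeout <= boost_after)
		return dma_fence_wait_timeout(fence, intr, timeout);

	ret = dma_fence_wait_timeout(fence, intr, boost_after);
	if (ret)
		return ret;	/* signaled, interrupted, or error */

	/* Still not signaled after ~2 frames: hint the signaler to clock up. */
	dma_fence_boost(fence);

	return dma_fence_wait_timeout(fence, intr, timeout - boost_after);
}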
But otherwise yes this is absolutely a thing that matters a ton. If you look at Matt Brost's scheduler rfc, there's also a line item in there about adding this kind of boosting to drm/scheduler. -Daniel
Am 20.05.21 um 18:34 schrieb Daniel Vetter:
On Thu, May 20, 2021 at 06:01:39PM +0200, Christian König wrote:
Am 20.05.21 um 16:54 schrieb Rob Clark:
On Thu, May 20, 2021 at 7:11 AM Christian König christian.koenig@amd.com wrote:
Am 20.05.21 um 16:07 schrieb Rob Clark:
On Wed, May 19, 2021 at 11:47 PM Christian König christian.koenig@amd.com wrote:
Uff, that looks very hardware specific to me.
Howso? I'm not sure I agree.. and even if it was not useful for some hw, it should be useful for enough drivers (and harm no drivers), so I still think it is a good idea
The fallback plan is to go the i915 route and stop using atomic helpers and do the same thing inside the driver, but that doesn't help any of the cases where you have a separate kms and gpu driver.
Yeah, that's certainly not something we want.
As far as I can see you can also implement completely inside the backend by starting a timer on enable_signaling, don't you?
Not really.. I mean, the fact that something waited on a fence could be a useful input signal to gpu freq governor, but it is entirely insufficient..
If the cpu is spending a lot of time waiting on a fence, cpufreq will clock down so you spend less time waiting. And no problem has been solved. You absolutely need the concept of a missed deadline, and a timer doesn't give you that.
Ok then I probably don't understand the use case here.
What exactly do you try to solve?
Basically situations where you are ping-ponging between GPU and CPU.. for example if you are double buffering instead of triple buffering, and doing vblank sync'd pageflips. The GPU, without any extra signal, could get stuck at 30fps and a low gpu freq, because it ends up idle while waiting for an extra vblank cycle for the next back-buffer to become available. Whereas if it boosted up to a higher freq and stopped missing a vblank deadline, it would be less idle due to getting the next back-buffer sooner (due to not missing a vblank deadline).
Ok the is the why, but what about the how?
How does it help to have this boost callback and not just start a time on enable signaling and stop it when the signal arrives?
Because the render side (or drm/scheduler, if msm would use that) has no idea for which vblank a rendering actually is for.
AH! So we are basically telling the fence backend that we have just missed an event we waited for.
So what we want to know is how long the frontend wanted to wait instead of how long the backend took for rendering.
So boosting right when you've missed your frame (not what Rob implements currently, but fixable) is the right semantics.
The other issue is that for cpu waits, we want to differentiate from fence waits that userspace does intentially (e.g. wait ioctl) and waits that random other things are doing within the kernel to keep track of progress.
For the former we know that userspace is stuck waiting for the gpu, and we probably want to boost. For the latter we most definitely do _not_ want to boost.
Otoh I do agree with you that the current api is a bit awkward, so perhaps we do need a dma_fence_userspace_wait wrapper which boosts automatically after a bit. And similarly perhaps a drm_vblank_dma_fence_wait, where you give it a vblank target, and if the fence isn't signalled by then, we kick it real hard.
Yeah, something like a use-case-driven API would be nice to have.
For this particular case I suggest that we somehow extend the enable signaling callback.
But otherwise yes this is absolutely a thing that matters a ton. If you look at Matt Brost's scheduler rfc, there's also a line item in there about adding this kind of boosting to drm/scheduler.
BTW: I still can't see this in my inbox.
Do you have a link?
Christian.
On Thu, May 20, 2021 at 6:41 PM Christian König christian.koenig@amd.com wrote:
Am 20.05.21 um 18:34 schrieb Daniel Vetter:
On Thu, May 20, 2021 at 06:01:39PM +0200, Christian König wrote:
Am 20.05.21 um 16:54 schrieb Rob Clark:
On Thu, May 20, 2021 at 7:11 AM Christian König christian.koenig@amd.com wrote:
Am 20.05.21 um 16:07 schrieb Rob Clark:
On Wed, May 19, 2021 at 11:47 PM Christian König christian.koenig@amd.com wrote:
> Uff, that looks very hardware specific to me.
Howso? I'm not sure I agree.. and even if it was not useful for some hw, it should be useful for enough drivers (and harm no drivers), so I still think it is a good idea
The fallback plan is to go the i915 route and stop using atomic helpers and do the same thing inside the driver, but that doesn't help any of the cases where you have a separate kms and gpu driver.
Yeah, that's certainly not something we want.
> As far as I can see you can also implement completely inside the backend
> by starting a timer on enable_signaling, don't you?
Not really.. I mean, the fact that something waited on a fence could be a useful input signal to gpu freq governor, but it is entirely insufficient..
If the cpu is spending a lot of time waiting on a fence, cpufreq will clock down so you spend less time waiting. And no problem has been solved. You absolutely need the concept of a missed deadline, and a timer doesn't give you that.
Ok then I probably don't understand the use case here.
What exactly do you try to solve?
Basically situations where you are ping-ponging between GPU and CPU.. for example if you are double buffering instead of triple buffering, and doing vblank sync'd pageflips. The GPU, without any extra signal, could get stuck at 30fps and a low gpu freq, because it ends up idle while waiting for an extra vblank cycle for the next back-buffer to become available. Whereas if it boosted up to a higher freq and stopped missing a vblank deadline, it would be less idle due to getting the next back-buffer sooner (due to not missing a vblank deadline).
Ok the is the why, but what about the how?
How does it help to have this boost callback and not just start a time on enable signaling and stop it when the signal arrives?
Because the render side (or drm/scheduler, if msm would use that) has no idea for which vblank a rendering actually is for.
AH! So we are basically telling the fence backend that we have just missed an event we waited for.
So what we want to know is how long the frontend wanted to wait instead of how long the backend took for rendering.
tbh I'm not sure the timestamp matters at all. What we do in i915 is boost quite aggressively, and then let the usual clock tuning whittle it down if we overshot. Plus some cool-down to prevent abuse/continuous boosting. I think we also differentiate between display boosts and userspace waits.
On the display side we also wait until the vblank we aimed for has passed (atm always the next one, we don't have target_frame support like amdgpu), to avoid boosting when there's no point.
So boosting right when you've missed your frame (not what Rob implements currently, but fixable) is the right semantics.
The other issue is that for cpu waits, we want to differentiate from fence waits that userspace does intentially (e.g. wait ioctl) and waits that random other things are doing within the kernel to keep track of progress.
For the former we know that userspace is stuck waiting for the gpu, and we probably want to boost. For the latter we most definitely do _not_ want to boost.
Otoh I do agree with you that the current api is a bit awkward, so perhaps we do need a dma_fence_userspace_wait wrapper which boosts automatically after a bit. And similarly perhaps a drm_vblank_dma_fence_wait, where you give it a vblank target, and if the fence isn't signalled by then, we kick it real hard.
Yeah, something like an use case driven API would be nice to have.
For this particular case I suggest that we somehow extend the enable signaling callback.
But otherwise yes this is absolutely a thing that matters a ton. If you look at Matt Brost's scheduler rfc, there's also a line item in there about adding this kind of boosting to drm/scheduler.
BTW: I still can't see this in my inbox.
You've replied already:
https://lore.kernel.org/dri-devel/20210518235830.133834-1-matthew.brost@inte...
It's just the big picture plan of what areas we're all trying to tackle with some why, so that everyone knows what's coming in the next half year at least. Probably longer until this is all sorted. I think Matt has some poc hacked-up pile, but nothing really to show. -Daniel
Am 20.05.21 um 19:08 schrieb Daniel Vetter:
[SNIP]
AH! So we are basically telling the fence backend that we have just missed an event we waited for.
So what we want to know is how long the frontend wanted to wait instead of how long the backend took for rendering.
tbh I'm not sure the timestamp matters at all. What we do in i915 is boost quite aggressively, and then let the usual clock tuning wittle it down if we overshot. Plus soom cool-down to prevent abuse/continuous boosting. I think we also differentiate between display boost and userspace waits.
I was not thinking about time stamps here, but more like which information we need at which place.
On the display side we also wait until the vblank has passed we aimed for (atm always the next, we don't have target_frame support like amdgpu), to avoid boosting when there's no point.
So boosting right when you've missed your frame (not what Rob implements currently, but fixable) is the right semantics.
The other issue is that for cpu waits, we want to differentiate from fence waits that userspace does intentially (e.g. wait ioctl) and waits that random other things are doing within the kernel to keep track of progress.
For the former we know that userspace is stuck waiting for the gpu, and we probably want to boost. For the latter we most definitely do _not_ want to boost.
Otoh I do agree with you that the current api is a bit awkward, so perhaps we do need a dma_fence_userspace_wait wrapper which boosts automatically after a bit. And similarly perhaps a drm_vblank_dma_fence_wait, where you give it a vblank target, and if the fence isn't signalled by then, we kick it real hard.
Yeah, something like an use case driven API would be nice to have.
For this particular case I suggest that we somehow extend the enable signaling callback.
But otherwise yes this is absolutely a thing that matters a ton. If you look at Matt Brost's scheduler rfc, there's also a line item in there about adding this kind of boosting to drm/scheduler.
BTW: I still can't see this in my inbox.
You've replied already:
https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flore.kerne...
Yeah, but doesn't that also require some changes to the DRM scheduler?
I was expecting that this is a bit more than just two patches.
Christian.
On Fri, May 21, 2021 at 09:43:59AM +0200, Christian König wrote:
Am 20.05.21 um 19:08 schrieb Daniel Vetter:
[SNIP]
AH! So we are basically telling the fence backend that we have just missed an event we waited for.
So what we want to know is how long the frontend wanted to wait instead of how long the backend took for rendering.
tbh I'm not sure the timestamp matters at all. What we do in i915 is boost quite aggressively, and then let the usual clock tuning wittle it down if we overshot. Plus soom cool-down to prevent abuse/continuous boosting. I think we also differentiate between display boost and userspace waits.
I was not thinking about time stamps here, but more like which information we need at which place.
On the display side we also wait until the vblank has passed we aimed for (atm always the next, we don't have target_frame support like amdgpu), to avoid boosting when there's no point.
So boosting right when you've missed your frame (not what Rob implements currently, but fixable) is the right semantics.
The other issue is that for cpu waits, we want to differentiate from fence waits that userspace does intentially (e.g. wait ioctl) and waits that random other things are doing within the kernel to keep track of progress.
For the former we know that userspace is stuck waiting for the gpu, and we probably want to boost. For the latter we most definitely do _not_ want to boost.
Otoh I do agree with you that the current api is a bit awkward, so perhaps we do need a dma_fence_userspace_wait wrapper which boosts automatically after a bit. And similarly perhaps a drm_vblank_dma_fence_wait, where you give it a vblank target, and if the fence isn't signalled by then, we kick it real hard.
Yeah, something like an use case driven API would be nice to have.
For this particular case I suggest that we somehow extend the enable signaling callback.
But otherwise yes this is absolutely a thing that matters a ton. If you look at Matt Brost's scheduler rfc, there's also a line item in there about adding this kind of boosting to drm/scheduler.
BTW: I still can't see this in my inbox.
You've replied already:
https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flore.kerne...
Yeah, but doesn't that also require some changes to the DRM scheduler?
I was expecting that this is a bit more than just two patches.
It's just the rfc document, per the new rfc process:
https://dri.freedesktop.org/docs/drm/gpu/rfc/
It's rather obviously not any piece of code in there, but just meant to check rough direction before we go rewrite the entire i915 execbuf frontend. -Daniel
On Thu, May 20, 2021 at 4:03 PM Rob Clark robdclark@gmail.com wrote:
On Wed, May 19, 2021 at 11:47 PM Christian König christian.koenig@amd.com wrote:
Uff, that looks very hardware specific to me.
Howso? I'm not sure I agree.. and even if it was not useful for some hw, it should be useful for enough drivers (and harm no drivers), so I still think it is a good idea
The fallback plan is to go the i915 route and stop using atomic helpers and do the same thing inside the driver, but that doesn't help any of the cases where you have a separate kms and gpu driver.
Don't, because the i915 plan is to actually move towards drm/scheduler and atomic helpers.
As far as I can see you can also implement completely inside the backend by starting a timer on enable_signaling, don't you?
Not really.. I mean, the fact that something waited on a fence could be a useful input signal to gpu freq governor, but it is entirely insufficient..
If the cpu is spending a lot of time waiting on a fence, cpufreq will clock down so you spend less time waiting. And no problem has been solved. You absolutely need the concept of a missed deadline, and a timer doesn't give you that.
Yup agreed.
Adding Matt Brost, since he's planning all this boostback work. -Daniel
From: Rob Clark <robdclark@chromium.org>
Note, at this point I haven't given a lot of consideration to how much we should boost, or for how long. And perhaps we should only boost at less than 50% utilization? For now, this is only an example of a dma_fence_boost() implementation.
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
 drivers/gpu/drm/msm/msm_fence.c | 10 ++++++++++
 drivers/gpu/drm/msm/msm_gpu.c   | 13 +++++++++++++
 drivers/gpu/drm/msm/msm_gpu.h   |  2 ++
 3 files changed, 25 insertions(+)

diff --git a/drivers/gpu/drm/msm/msm_fence.c b/drivers/gpu/drm/msm/msm_fence.c
index cd59a5918038..e58895603726 100644
--- a/drivers/gpu/drm/msm/msm_fence.c
+++ b/drivers/gpu/drm/msm/msm_fence.c
@@ -8,6 +8,7 @@
 
 #include "msm_drv.h"
 #include "msm_fence.h"
+#include "msm_gpu.h"
 
 
 struct msm_fence_context *
@@ -114,10 +115,19 @@ static bool msm_fence_signaled(struct dma_fence *fence)
 	return fence_completed(f->fctx, f->base.seqno);
 }
 
+static void msm_fence_boost(struct dma_fence *fence)
+{
+	struct msm_fence *f = to_msm_fence(fence);
+	struct msm_drm_private *priv = f->fctx->dev->dev_private;
+
+	msm_gpu_boost(priv->gpu);
+}
+
 static const struct dma_fence_ops msm_fence_ops = {
 	.get_driver_name = msm_fence_get_driver_name,
 	.get_timeline_name = msm_fence_get_timeline_name,
 	.signaled = msm_fence_signaled,
+	.boost = msm_fence_boost,
 };
 
 struct dma_fence *
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 9dd1c58430ab..c90b79116500 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -62,6 +62,10 @@ static int msm_devfreq_get_dev_status(struct device *dev,
 	status->total_time = ktime_us_delta(time, gpu->devfreq.time);
 	gpu->devfreq.time = time;
 
+	if (atomic_dec_if_positive(&gpu->devfreq.boost) >= 0) {
+		status->busy_time = status->total_time;
+	}
+
 	return 0;
 }
 
@@ -84,6 +88,15 @@ static struct devfreq_dev_profile msm_devfreq_profile = {
 	.get_cur_freq = msm_devfreq_get_cur_freq,
 };
 
+void msm_gpu_boost(struct msm_gpu *gpu)
+{
+	if (!gpu->funcs->gpu_busy)
+		return;
+
+	/* Add three devfreq polling intervals worth of boost: */
+	atomic_add(3, &gpu->devfreq.boost);
+}
+
 static void msm_devfreq_init(struct msm_gpu *gpu)
 {
 	/* We need target support to do devfreq */
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 18baf935e143..7a082a12d98f 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -150,6 +150,7 @@ struct msm_gpu {
 		struct devfreq *devfreq;
 		u64 busy_cycles;
 		ktime_t time;
+		atomic_t boost;
 	} devfreq;
 
 	uint32_t suspend_count;
@@ -295,6 +296,7 @@ static inline void gpu_write64(struct msm_gpu *gpu, u32 lo, u32 hi, u64 val)
 int msm_gpu_pm_suspend(struct msm_gpu *gpu);
 int msm_gpu_pm_resume(struct msm_gpu *gpu);
 void msm_gpu_resume_devfreq(struct msm_gpu *gpu);
+void msm_gpu_boost(struct msm_gpu *gpu);
 
 int msm_gpu_hw_init(struct msm_gpu *gpu);
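As a follow-up to the "less than 50% utilization" question above, one possible variant is sketched here; msm_devfreq_apply_boost() is a hypothetical helper (not part of this patch) that msm_devfreq_get_dev_status() could call instead of applying the boost unconditionally:

/* Hypothetical variant, not part of the patch: only report the GPU as
 * fully busy when it was under ~50% utilization in the last devfreq
 * sampling period, so an already-busy GPU is not boosted further.
 */
static void msm_devfreq_apply_boost(struct msm_gpu *gpu,
				    struct devfreq_dev_status *status)
{
	bool mostly_idle = (status->busy_time * 2) < status->total_time;

	if (!mostly_idle) {
		/* Already >50% utilized; let the normal governor handle it. */
		atomic_set(&gpu->devfreq.boost, 0);
		return;
	}

	if (atomic_dec_if_positive(&gpu->devfreq.boost) >= 0)
		status->busy_time = status->total_time;
}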