From: Andrey Smirnov <andrew.smirnov(a)gmail.com>
[ Upstream commit e0655feaec62d5139b6b13a7b1bbb1ab8f1c2d83 ]
According to the datasheet, tc358767 can transfer up to 16 bytes via
its AUX channel, so the artificial limit of 8 appears to be too
low. However, only up to 15 bytes seem to be actually supported, and
trying to use 16-byte transfers results in transfers failing
sporadically (with bogus status in the case of I2C transfers), so limit
it to 15.
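For reference, a minimal standalone sketch (not the driver code itself) of what the new clamp does, assuming DP_AUX_MAX_PAYLOAD_BYTES is 16 as defined in include/drm/drm_dp_helper.h:

/*
 * Illustration only, not part of the patch: the effect of the new clamp
 * in tc_aux_transfer() — transfers are now capped at 15 bytes instead
 * of the old 8.
 */
#include <stddef.h>
#include <stdio.h>

#define DP_AUX_MAX_PAYLOAD_BYTES 16

static size_t tc_aux_clamp(size_t msg_size)
{
	size_t cap = DP_AUX_MAX_PAYLOAD_BYTES - 1;	/* 15 bytes */

	return msg_size < cap ? msg_size : cap;
}

int main(void)
{
	/* prints "4 15 15": requests above 15 bytes are truncated */
	printf("%zu %zu %zu\n",
	       tc_aux_clamp(4), tc_aux_clamp(15), tc_aux_clamp(16));
	return 0;
}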
Signed-off-by: Andrey Smirnov <andrew.smirnov(a)gmail.com>
Reviewed-by: Andrzej Hajda <a.hajda(a)samsung.com>
Reviewed-by: Tomi Valkeinen <tomi.valkeinen(a)ti.com>
Cc: Andrzej Hajda <a.hajda(a)samsung.com>
Cc: Laurent Pinchart <Laurent.pinchart(a)ideasonboard.com>
Cc: Tomi Valkeinen <tomi.valkeinen(a)ti.com>
Cc: Andrey Gusakov <andrey.gusakov(a)cogentembedded.com>
Cc: Philipp Zabel <p.zabel(a)pengutronix.de>
Cc: Cory Tusar <cory.tusar(a)zii.aero>
Cc: Chris Healy <cphealy(a)gmail.com>
Cc: Lucas Stach <l.stach(a)pengutronix.de>
Cc: dri-devel(a)lists.freedesktop.org
Cc: linux-kernel(a)vger.kernel.org
Signed-off-by: Andrzej Hajda <a.hajda(a)samsung.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190619052716.16831-9-andrew…
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
drivers/gpu/drm/bridge/tc358767.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/bridge/tc358767.c b/drivers/gpu/drm/bridge/tc358767.c
index 9705ca197b90d..cefa2c1685ba4 100644
--- a/drivers/gpu/drm/bridge/tc358767.c
+++ b/drivers/gpu/drm/bridge/tc358767.c
@@ -300,7 +300,7 @@ static ssize_t tc_aux_transfer(struct drm_dp_aux *aux,
struct drm_dp_aux_msg *msg)
{
struct tc_data *tc = aux_to_tc(aux);
- size_t size = min_t(size_t, 8, msg->size);
+ size_t size = min_t(size_t, DP_AUX_MAX_PAYLOAD_BYTES - 1, msg->size);
u8 request = msg->request & ~DP_AUX_I2C_MOT;
u8 *buf = msg->buffer;
u32 tmp = 0;
--
2.20.1
From: Andrey Smirnov <andrew.smirnov(a)gmail.com>
[ Upstream commit e0655feaec62d5139b6b13a7b1bbb1ab8f1c2d83 ]
According to the datasheet, tc358767 can transfer up to 16 bytes via
its AUX channel, so the artificial limit of 8 appears to be too
low. However, only up to 15 bytes seem to be actually supported, and
trying to use 16-byte transfers results in transfers failing
sporadically (with bogus status in the case of I2C transfers), so limit
it to 15.
Signed-off-by: Andrey Smirnov <andrew.smirnov(a)gmail.com>
Reviewed-by: Andrzej Hajda <a.hajda(a)samsung.com>
Reviewed-by: Tomi Valkeinen <tomi.valkeinen(a)ti.com>
Cc: Andrzej Hajda <a.hajda(a)samsung.com>
Cc: Laurent Pinchart <Laurent.pinchart(a)ideasonboard.com>
Cc: Tomi Valkeinen <tomi.valkeinen(a)ti.com>
Cc: Andrey Gusakov <andrey.gusakov(a)cogentembedded.com>
Cc: Philipp Zabel <p.zabel(a)pengutronix.de>
Cc: Cory Tusar <cory.tusar(a)zii.aero>
Cc: Chris Healy <cphealy(a)gmail.com>
Cc: Lucas Stach <l.stach(a)pengutronix.de>
Cc: dri-devel(a)lists.freedesktop.org
Cc: linux-kernel(a)vger.kernel.org
Signed-off-by: Andrzej Hajda <a.hajda(a)samsung.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190619052716.16831-9-andrew…
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
drivers/gpu/drm/bridge/tc358767.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/bridge/tc358767.c b/drivers/gpu/drm/bridge/tc358767.c
index aaca5248da070..d728b6cf61096 100644
--- a/drivers/gpu/drm/bridge/tc358767.c
+++ b/drivers/gpu/drm/bridge/tc358767.c
@@ -302,7 +302,7 @@ static ssize_t tc_aux_transfer(struct drm_dp_aux *aux,
struct drm_dp_aux_msg *msg)
{
struct tc_data *tc = aux_to_tc(aux);
- size_t size = min_t(size_t, 8, msg->size);
+ size_t size = min_t(size_t, DP_AUX_MAX_PAYLOAD_BYTES - 1, msg->size);
u8 request = msg->request & ~DP_AUX_I2C_MOT;
u8 *buf = msg->buffer;
u32 tmp = 0;
--
2.20.1
From: Daniel Vetter <daniel.vetter(a)ffwll.ch>
[ Upstream commit 18d0952a838ba559655b0cd9cf85097ad63d9bca ]
The issue we have is that the crc worker might fall behind. We've
tried to handle this by tracking both the earliest frame for which it
still needs to compute a crc, and the last one. Plus when the
crtc_state changes, we get a new work item; these are all run in
order due to the ordered workqueue we allocate for each vkms crtc.
The trouble is that there have been a few small issues in the current code:
- we need to capture frame_end in the vblank hrtimer, not in the
worker. The worker might run much later, and then we generate a lot
of crc entries for which there's already a different work item queued up.
- frame number might be 0, so create a new crc_pending boolean to
track this without confusion.
- we need to atomically grab frame_start/end and clear it, so do that
all in one go. This is not going to create a new race, because if we
race with the hrtimer then our work will be re-run.
- the only race that can happen is the following:
1. worker starts
2. hrtimer runs and updates frame_end
3. worker grabs frame_start/end, already reading the new frame_end,
and clears crc_pending
4. hrtimer calls queue_work()
5. worker completes
6. worker gets re-run, crc_pending is false
Explain this case a bit better by rewording the comment.
v2: Demote the warning-level output to debug when we fail to requeue; this
is expected under high load when the crc worker can't quite keep up.
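For illustration, a small self-contained userspace analog of the snapshot-and-clear pattern described above (a sketch only: a pthread mutex stands in for the driver's spinlock, and none of the names below come from vkms):

/*
 * Sketch: the worker atomically copies frame_start/frame_end and the
 * pending flag, clears them, and only then decides outside the lock
 * whether there is anything to do.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct crc_state {
	pthread_mutex_t lock;
	bool crc_pending;
	uint64_t frame_start;
	uint64_t frame_end;
};

static void crc_worker(struct crc_state *st)
{
	uint64_t frame_start, frame_end;
	bool crc_pending;

	pthread_mutex_lock(&st->lock);
	frame_start = st->frame_start;
	frame_end = st->frame_end;
	crc_pending = st->crc_pending;
	st->frame_start = 0;
	st->frame_end = 0;
	st->crc_pending = false;
	pthread_mutex_unlock(&st->lock);

	/* Raced with the "hrtimer" side: a previous run already did the work. */
	if (!crc_pending)
		return;

	/* Catch up on every frame the worker fell behind on. */
	while (frame_start <= frame_end)
		printf("add crc entry for frame %llu\n",
		       (unsigned long long)frame_start++);
}

int main(void)
{
	struct crc_state st = {
		.crc_pending = true,
		.frame_start = 10,
		.frame_end = 12,
	};

	pthread_mutex_init(&st.lock, NULL);
	crc_worker(&st);	/* prints entries for frames 10, 11, 12 */
	crc_worker(&st);	/* crc_pending already cleared: no output */
	pthread_mutex_destroy(&st.lock);
	return 0;
}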
Cc: Shayenne Moura <shayenneluzmoura(a)gmail.com>
Cc: Rodrigo Siqueira <rodrigosiqueiramelo(a)gmail.com>
Cc: Haneen Mohammed <hamohammed.sa(a)gmail.com>
Cc: Daniel Vetter <daniel(a)ffwll.ch>
Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
Reviewed-by: Rodrigo Siqueira <rodrigosiqueiramelo(a)gmail.com>
Tested-by: Rodrigo Siqueira <rodrigosiqueiramelo(a)gmail.com>
Signed-off-by: Rodrigo Siqueira <rodrigosiqueiramelo(a)gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190606222751.32567-2-daniel…
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
drivers/gpu/drm/vkms/vkms_crc.c | 27 +++++++++++++--------------
drivers/gpu/drm/vkms/vkms_crtc.c | 9 +++++++--
drivers/gpu/drm/vkms/vkms_drv.h | 2 ++
3 files changed, 22 insertions(+), 16 deletions(-)
diff --git a/drivers/gpu/drm/vkms/vkms_crc.c b/drivers/gpu/drm/vkms/vkms_crc.c
index d7b409a3c0f8c..66603da634fe3 100644
--- a/drivers/gpu/drm/vkms/vkms_crc.c
+++ b/drivers/gpu/drm/vkms/vkms_crc.c
@@ -166,16 +166,24 @@ void vkms_crc_work_handle(struct work_struct *work)
struct drm_plane *plane;
u32 crc32 = 0;
u64 frame_start, frame_end;
+ bool crc_pending;
unsigned long flags;
spin_lock_irqsave(&out->state_lock, flags);
frame_start = crtc_state->frame_start;
frame_end = crtc_state->frame_end;
+ crc_pending = crtc_state->crc_pending;
+ crtc_state->frame_start = 0;
+ crtc_state->frame_end = 0;
+ crtc_state->crc_pending = false;
spin_unlock_irqrestore(&out->state_lock, flags);
- /* _vblank_handle() hasn't updated frame_start yet */
- if (!frame_start || frame_start == frame_end)
- goto out;
+ /*
+ * We raced with the vblank hrtimer and previous work already computed
+ * the crc, nothing to do.
+ */
+ if (!crc_pending)
+ return;
drm_for_each_plane(plane, &vdev->drm) {
struct vkms_plane_state *vplane_state;
@@ -196,20 +204,11 @@ void vkms_crc_work_handle(struct work_struct *work)
if (primary_crc)
crc32 = _vkms_get_crc(primary_crc, cursor_crc);
- frame_end = drm_crtc_accurate_vblank_count(crtc);
-
- /* queue_work can fail to schedule crc_work; add crc for
- * missing frames
+ /*
+ * The worker can fall behind the vblank hrtimer, make sure we catch up.
*/
while (frame_start <= frame_end)
drm_crtc_add_crc_entry(crtc, true, frame_start++, &crc32);
-
-out:
- /* to avoid using the same value for frame number again */
- spin_lock_irqsave(&out->state_lock, flags);
- crtc_state->frame_end = frame_end;
- crtc_state->frame_start = 0;
- spin_unlock_irqrestore(&out->state_lock, flags);
}
static int vkms_crc_parse_source(const char *src_name, bool *enabled)
diff --git a/drivers/gpu/drm/vkms/vkms_crtc.c b/drivers/gpu/drm/vkms/vkms_crtc.c
index e447b7588d06e..77a1f5fa5d5c8 100644
--- a/drivers/gpu/drm/vkms/vkms_crtc.c
+++ b/drivers/gpu/drm/vkms/vkms_crtc.c
@@ -30,13 +30,18 @@ static enum hrtimer_restart vkms_vblank_simulate(struct hrtimer *timer)
* has read the data
*/
spin_lock(&output->state_lock);
- if (!state->frame_start)
+ if (!state->crc_pending)
state->frame_start = frame;
+ else
+ DRM_DEBUG_DRIVER("crc worker falling behind, frame_start: %llu, frame_end: %llu\n",
+ state->frame_start, frame);
+ state->frame_end = frame;
+ state->crc_pending = true;
spin_unlock(&output->state_lock);
ret = queue_work(output->crc_workq, &state->crc_work);
if (!ret)
- DRM_WARN("failed to queue vkms_crc_work_handle");
+ DRM_DEBUG_DRIVER("vkms_crc_work_handle already queued\n");
}
spin_unlock(&output->lock);
diff --git a/drivers/gpu/drm/vkms/vkms_drv.h b/drivers/gpu/drm/vkms/vkms_drv.h
index 81f1cfbeb9362..3c7e06b19efd5 100644
--- a/drivers/gpu/drm/vkms/vkms_drv.h
+++ b/drivers/gpu/drm/vkms/vkms_drv.h
@@ -56,6 +56,8 @@ struct vkms_plane_state {
struct vkms_crtc_state {
struct drm_crtc_state base;
struct work_struct crc_work;
+
+ bool crc_pending;
u64 frame_start;
u64 frame_end;
};
--
2.20.1
From: Daniel Vetter <daniel.vetter(a)ffwll.ch>
[ Upstream commit 18d0952a838ba559655b0cd9cf85097ad63d9bca ]
The issue we have is that the crc worker might fall behind. We've
tried to handle this by tracking both the earliest frame for which it
still needs to compute a crc, and the last one. Plus when the
crtc_state changes, we get a new work item; these are all run in
order due to the ordered workqueue we allocate for each vkms crtc.
The trouble is that there have been a few small issues in the current code:
- we need to capture frame_end in the vblank hrtimer, not in the
worker. The worker might run much later, and then we generate a lot
of crc entries for which there's already a different work item queued up.
- frame number might be 0, so create a new crc_pending boolean to
track this without confusion.
- we need to atomically grab frame_start/end and clear it, so do that
all in one go. This is not going to create a new race, because if we
race with the hrtimer then our work will be re-run.
- the only race that can happen is the following:
1. worker starts
2. hrtimer runs and updates frame_end
3. worker grabs frame_start/end, already reading the new frame_end,
and clears crc_pending
4. hrtimer calls queue_work()
5. worker completes
6. worker gets re-run, crc_pending is false
Explain this case a bit better by rewording the comment.
v2: Demote the warning-level output to debug when we fail to requeue; this
is expected under high load when the crc worker can't quite keep up.
Cc: Shayenne Moura <shayenneluzmoura(a)gmail.com>
Cc: Rodrigo Siqueira <rodrigosiqueiramelo(a)gmail.com>
Cc: Haneen Mohammed <hamohammed.sa(a)gmail.com>
Cc: Daniel Vetter <daniel(a)ffwll.ch>
Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
Reviewed-by: Rodrigo Siqueira <rodrigosiqueiramelo(a)gmail.com>
Tested-by: Rodrigo Siqueira <rodrigosiqueiramelo(a)gmail.com>
Signed-off-by: Rodrigo Siqueira <rodrigosiqueiramelo(a)gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190606222751.32567-2-daniel…
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
drivers/gpu/drm/vkms/vkms_crc.c | 27 +++++++++++++--------------
drivers/gpu/drm/vkms/vkms_crtc.c | 9 +++++++--
drivers/gpu/drm/vkms/vkms_drv.h | 2 ++
3 files changed, 22 insertions(+), 16 deletions(-)
diff --git a/drivers/gpu/drm/vkms/vkms_crc.c b/drivers/gpu/drm/vkms/vkms_crc.c
index e66ff25c008e6..e9fb4ebb789fd 100644
--- a/drivers/gpu/drm/vkms/vkms_crc.c
+++ b/drivers/gpu/drm/vkms/vkms_crc.c
@@ -166,16 +166,24 @@ void vkms_crc_work_handle(struct work_struct *work)
struct drm_plane *plane;
u32 crc32 = 0;
u64 frame_start, frame_end;
+ bool crc_pending;
unsigned long flags;
spin_lock_irqsave(&out->state_lock, flags);
frame_start = crtc_state->frame_start;
frame_end = crtc_state->frame_end;
+ crc_pending = crtc_state->crc_pending;
+ crtc_state->frame_start = 0;
+ crtc_state->frame_end = 0;
+ crtc_state->crc_pending = false;
spin_unlock_irqrestore(&out->state_lock, flags);
- /* _vblank_handle() hasn't updated frame_start yet */
- if (!frame_start || frame_start == frame_end)
- goto out;
+ /*
+ * We raced with the vblank hrtimer and previous work already computed
+ * the crc, nothing to do.
+ */
+ if (!crc_pending)
+ return;
drm_for_each_plane(plane, &vdev->drm) {
struct vkms_plane_state *vplane_state;
@@ -196,20 +204,11 @@ void vkms_crc_work_handle(struct work_struct *work)
if (primary_crc)
crc32 = _vkms_get_crc(primary_crc, cursor_crc);
- frame_end = drm_crtc_accurate_vblank_count(crtc);
-
- /* queue_work can fail to schedule crc_work; add crc for
- * missing frames
+ /*
+ * The worker can fall behind the vblank hrtimer, make sure we catch up.
*/
while (frame_start <= frame_end)
drm_crtc_add_crc_entry(crtc, true, frame_start++, &crc32);
-
-out:
- /* to avoid using the same value for frame number again */
- spin_lock_irqsave(&out->state_lock, flags);
- crtc_state->frame_end = frame_end;
- crtc_state->frame_start = 0;
- spin_unlock_irqrestore(&out->state_lock, flags);
}
static const char * const pipe_crc_sources[] = {"auto"};
diff --git a/drivers/gpu/drm/vkms/vkms_crtc.c b/drivers/gpu/drm/vkms/vkms_crtc.c
index 4d11292bc6f38..f392fa13015b8 100644
--- a/drivers/gpu/drm/vkms/vkms_crtc.c
+++ b/drivers/gpu/drm/vkms/vkms_crtc.c
@@ -30,13 +30,18 @@ static enum hrtimer_restart vkms_vblank_simulate(struct hrtimer *timer)
* has read the data
*/
spin_lock(&output->state_lock);
- if (!state->frame_start)
+ if (!state->crc_pending)
state->frame_start = frame;
+ else
+ DRM_DEBUG_DRIVER("crc worker falling behind, frame_start: %llu, frame_end: %llu\n",
+ state->frame_start, frame);
+ state->frame_end = frame;
+ state->crc_pending = true;
spin_unlock(&output->state_lock);
ret = queue_work(output->crc_workq, &state->crc_work);
if (!ret)
- DRM_WARN("failed to queue vkms_crc_work_handle");
+ DRM_DEBUG_DRIVER("vkms_crc_work_handle already queued\n");
}
spin_unlock(&output->lock);
diff --git a/drivers/gpu/drm/vkms/vkms_drv.h b/drivers/gpu/drm/vkms/vkms_drv.h
index b92c30c66a6f2..2b37eb1062d34 100644
--- a/drivers/gpu/drm/vkms/vkms_drv.h
+++ b/drivers/gpu/drm/vkms/vkms_drv.h
@@ -48,6 +48,8 @@ struct vkms_plane_state {
struct vkms_crtc_state {
struct drm_crtc_state base;
struct work_struct crc_work;
+
+ bool crc_pending;
u64 frame_start;
u64 frame_end;
};
--
2.20.1
Hi Greg,
This series includes some upstream patches aimed at fixing multiple
checksum issues with the mlx5 driver in 4.19-stable kernels.
Since the patches didn't apply cleanly to 4.19 back when they were
first submitted to the netdev mailing list around the 5.1 kernel
release, I couldn't mark them for 4.19 -stable. Now that the issue is
being reported on 4.19 LTS kernels, I had to do the backporting and
this submission myself.
This series required some dependency patches, and some of them needed
manual touch-ups to apply.
Please apply to 4.19-stable and let me know if there's any problem.
I tested the series; the patches apply cleanly and work on top of v4.19.75.
Thanks,
Saeed.
---
Alaa Hleihel (1):
net/mlx5e: don't set CHECKSUM_COMPLETE on SCTP packets
Cong Wang (1):
mlx5: fix get_ip_proto()
Natali Shechtman (1):
net/mlx5e: Set ECN for received packets using CQE indication
Or Gerlitz (1):
net/mlx5e: Allow reporting of checksum unnecessary
Saeed Mahameed (3):
net/mlx5e: XDP, Avoid checksum complete when XDP prog is loaded
net/mlx5e: Rx, Fixup skb checksum for packets with tail padding
net/mlx5e: Rx, Check ip headers sanity
drivers/net/ethernet/mellanox/mlx5/core/en.h | 3 +
.../ethernet/mellanox/mlx5/core/en_ethtool.c | 28 ++++
.../net/ethernet/mellanox/mlx5/core/en_main.c | 8 ++
.../net/ethernet/mellanox/mlx5/core/en_rx.c | 126 +++++++++++++++---
.../ethernet/mellanox/mlx5/core/en_stats.c | 9 ++
.../ethernet/mellanox/mlx5/core/en_stats.h | 6 +
6 files changed, 165 insertions(+), 15 deletions(-)
--
2.21.0
From: Aurelien Aptel <aaptel(a)suse.com>
Commit 7e5a70ad88b1 ("CIFS: fix deadlock in cached root handling") upstream.
Prevent a deadlock between open_shroot() and
cifs_mark_open_files_invalid() by releasing the lock before entering
SMB2_open, taking it again afterwards, and checking whether we still
need to use the result.
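For illustration, a minimal userspace sketch of the general shape of this fix (not the CIFS code: a pthread mutex stands in for crfid.fid_mutex and all names below are made up for the example):

/*
 * Sketch only: drop the mutex around a call that may re-enter and take
 * the same mutex, re-acquire it afterwards, and re-check whether a
 * concurrent caller already did the work in the meantime.
 */
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t fid_mutex = PTHREAD_MUTEX_INITIALIZER;
static bool cached_root_valid;

/* Stand-in for SMB2_open(); on reconnect it may want fid_mutex again. */
static int do_open(void)
{
	return 0;
}

static int open_cached_root(void)
{
	int rc;

	pthread_mutex_lock(&fid_mutex);
	if (cached_root_valid) {		/* already cached */
		pthread_mutex_unlock(&fid_mutex);
		return 0;
	}

	/* Drop the lock so a reconnect inside do_open() cannot deadlock. */
	pthread_mutex_unlock(&fid_mutex);
	rc = do_open();
	pthread_mutex_lock(&fid_mutex);

	/* Someone else may have cached the root while we were unlocked. */
	if (cached_root_valid) {
		pthread_mutex_unlock(&fid_mutex);
		/* a successful rc here means we hold an extra handle to close */
		return 0;
	}

	if (rc == 0)
		cached_root_valid = true;
	pthread_mutex_unlock(&fid_mutex);
	return rc;
}

int main(void)
{
	return open_cached_root();
}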
CC: <stable(a)vger.kernel.org> # v4.19+
Link: https://lore.kernel.org/linux-cifs/684ed01c-cbca-2716-bc28-b0a59a0f8521@pro…
Fixes: 3d4ef9a15343 ("smb3: fix redundant opens on root")
Signed-off-by: Aurelien Aptel <aaptel(a)suse.com>
Signed-off-by: Pavel Shilovsky <pshilov(a)microsoft.com>
---
fs/cifs/smb2ops.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 44 insertions(+)
diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
index cc9e846a3865..094be406cde4 100644
--- a/fs/cifs/smb2ops.c
+++ b/fs/cifs/smb2ops.c
@@ -553,7 +553,50 @@ int open_shroot(unsigned int xid, struct cifs_tcon *tcon, struct cifs_fid *pfid)
oparams.fid = pfid;
oparams.reconnect = false;
+ /*
+ * We do not hold the lock for the open because in case
+ * SMB2_open needs to reconnect, it will end up calling
+ * cifs_mark_open_files_invalid() which takes the lock again
+ * thus causing a deadlock
+ */
+ mutex_unlock(&tcon->crfid.fid_mutex);
rc = SMB2_open(xid, &oparams, &srch_path, &oplock, NULL, NULL, NULL);
+ mutex_lock(&tcon->crfid.fid_mutex);
+
+ /*
+ * Now we need to check again as the cached root might have
+ * been successfully re-opened from a concurrent process
+ */
+
+ if (tcon->crfid.is_valid) {
+ /* work was already done */
+
+ /* stash fids for close() later */
+ struct cifs_fid fid = {
+ .persistent_fid = pfid->persistent_fid,
+ .volatile_fid = pfid->volatile_fid,
+ };
+
+ /*
+ * Caller expects this func to set pfid to a valid
+ * cached root, so we copy the existing one and get a
+ * reference
+ */
+ memcpy(pfid, tcon->crfid.fid, sizeof(*pfid));
+ kref_get(&tcon->crfid.refcount);
+
+ mutex_unlock(&tcon->crfid.fid_mutex);
+
+ if (rc == 0) {
+ /* close extra handle outside of critical section */
+ SMB2_close(xid, tcon, fid.persistent_fid,
+ fid.volatile_fid);
+ }
+ return 0;
+ }
+
+ /* Cached root is still invalid, continue normally */
+
if (rc == 0) {
memcpy(tcon->crfid.fid, pfid, sizeof(struct cifs_fid));
tcon->crfid.tcon = tcon;
@@ -561,6 +604,7 @@ int open_shroot(unsigned int xid, struct cifs_tcon *tcon, struct cifs_fid *pfid)
kref_init(&tcon->crfid.refcount);
kref_get(&tcon->crfid.refcount);
}
+
mutex_unlock(&tcon->crfid.fid_mutex);
return rc;
}
--
2.17.1
Hole punching currently evicts pages from the page cache and then goes on
to remove blocks from the inode. This happens under both XFS_IOLOCK_EXCL
and XFS_MMAPLOCK_EXCL, which provides appropriate serialization with
racing reads or page faults. However, there is currently nothing that
prevents readahead triggered by fadvise() or madvise() from racing with
the hole punch and instantiating a page cache page after hole punching has
evicted the page cache in xfs_flush_unmap_range() but before it has removed
blocks from the inode. Such a page cache page will be mapping a
soon-to-be-freed block, and that can lead to returning stale data to
userspace or even filesystem corruption.
Fix the problem by protecting the handling of readahead requests with
XFS_IOLOCK_SHARED, similarly to how we protect reads.
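For context, the racing pair of operations looks roughly like the userspace sketch below (illustration only, not a reliable reproducer; in the real race the two calls run concurrently from different tasks):

/*
 * Sketch of the two operations whose kernel-side handling races:
 * readahead triggered by fadvise() and a hole punch on the same range.
 * With this patch the WILLNEED readahead takes XFS_IOLOCK_SHARED, so it
 * serializes against the punch, which holds XFS_IOLOCK_EXCL.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

static void race_pair(int fd, off_t off, off_t len)
{
	/* Task A: instantiate page cache for the range via readahead. */
	posix_fadvise(fd, off, len, POSIX_FADV_WILLNEED);

	/* Task B: punch a hole over the same range, freeing its blocks. */
	fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, off, len);
}

int main(int argc, char **argv)
{
	int fd;

	if (argc < 2)
		return 1;
	fd = open(argv[1], O_RDWR);
	if (fd < 0)
		return 1;
	race_pair(fd, 0, 1 << 20);
	close(fd);
	return 0;
}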
CC: stable(a)vger.kernel.org
Link: https://lore.kernel.org/linux-fsdevel/CAOQ4uxjQNmxqmtA_VbYW0Su9rKRk2zobJmah…
Reported-by: Amir Goldstein <amir73il(a)gmail.com>
Signed-off-by: Jan Kara <jack(a)suse.cz>
---
fs/xfs/xfs_file.c | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index 28101bbc0b78..d952d5962e93 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -28,6 +28,7 @@
#include <linux/falloc.h>
#include <linux/backing-dev.h>
#include <linux/mman.h>
+#include <linux/fadvise.h>
static const struct vm_operations_struct xfs_file_vm_ops;
@@ -933,6 +934,30 @@ xfs_file_fallocate(
return error;
}
+STATIC int
+xfs_file_fadvise(
+ struct file *file,
+ loff_t start,
+ loff_t end,
+ int advice)
+{
+ struct xfs_inode *ip = XFS_I(file_inode(file));
+ int ret;
+ int lockflags = 0;
+
+ /*
+ * Operations creating pages in page cache need protection from hole
+ * punching and similar ops
+ */
+ if (advice == POSIX_FADV_WILLNEED) {
+ lockflags = XFS_IOLOCK_SHARED;
+ xfs_ilock(ip, lockflags);
+ }
+ ret = generic_fadvise(file, start, end, advice);
+ if (lockflags)
+ xfs_iunlock(ip, lockflags);
+ return ret;
+}
STATIC loff_t
xfs_file_remap_range(
@@ -1232,6 +1257,7 @@ const struct file_operations xfs_file_operations = {
.fsync = xfs_file_fsync,
.get_unmapped_area = thp_get_unmapped_area,
.fallocate = xfs_file_fallocate,
+ .fadvise = xfs_file_fadvise,
.remap_file_range = xfs_file_remap_range,
};
--
2.16.4