Please let me know the status of your photos; we are waiting to retouch them.
Do your photos need a white background, sharpening, or retouching? We can do
it for you.
Send us the test photos today and we can start to work on them.
Thanks,
Denna
On Wed, Jan 23, 2019 at 2:57 PM Sasha Levin <sashal(a)kernel.org> wrote:
>
> Hi,
>
> [This is an automated email]
>
> This commit has been processed because it contains a "Fixes:" tag,
> fixing commit: 6b0c81b3be11 mm, oom: reduce dependency on tasklist_lock.
>
> The bot has tested the following trees: v4.20.3, v4.19.16, v4.14.94, v4.9.151, v4.4.171, v3.18.132.
>
> v4.20.3: Build OK!
> v4.19.16: Build OK!
> v4.14.94: Failed to apply! Possible dependencies:
> 5989ad7b5ede ("mm, oom: refactor oom_kill_process()")
>
The conflict is very easy to resolve. Please let me know if you want me
to send a version for the 4.14-stable kernel.
> v4.9.151: Build OK!
> v4.4.171: Build OK!
> v3.18.132: Build OK!
>
>
> How should we proceed with this patch?
>
We do want to backport this patch to stable kernels. However, shouldn't
we wait for this patch to be applied to Linus's tree first?
Shakeel
Hi Sasha,
Sorry for the noise; please ignore this patch since it has not been
sent to the linux-staging mailing list.
On 2019/1/24 6:57, Sasha Levin wrote:
> Hi,
>
> [This is an automated email]
>
> This commit has been processed because it contains a "Fixes:" tag,
> fixing commit: 3883a79abd02 staging: erofs: introduce VLE decompression support.
>
> The bot has tested the following trees: v4.20.3, v4.19.16.
>
> v4.20.3: Build OK!
> v4.19.16: Failed to apply! Possible dependencies:
> ab47dd2b0819 ("staging: erofs: cleanup z_erofs_vle_work_{lookup, register}")
>
Anyway, the corresponding 4.19 patch is attached at the end.
Thanks,
Gao Xiang
>
> How should we proceed with this patch?
>
> --
> Thanks,
> Sasha
>
From 6982dff48665dae0a6ed77ed2abef476e2d01488 Mon Sep 17 00:00:00 2001
From: Gao Xiang <gaoxiang25(a)huawei.com>
Date: Tue, 22 Jan 2019 17:29:47 +0800
Subject: [PATCH 4.19] staging: erofs: fix mis-acted TAIL merging behavior
EROFS has an optimized path called TAIL merging, which is designed
to merge multiple reads and the corresponding decompressions into
one if these requests read contiguous pages at almost the same time.
In general, it behaves as follows:
________________________________________________________________
 ... |   TAIL . HEAD   |  PAGE  |  PAGE  |   TAIL . HEAD   | ...
_____|_combined page A_|________|________|_combined page B_|____
       1      ]  ->  [             2              ]  ->  [  3
If the above three reads are requested in the order 1-2-3, EROFS
generates one large work chain rather than three individual work chains,
which reduces scheduling overhead and speeds up sequential reads.
However, if read 2 is processed slightly earlier than read 1, the code
currently still generates two individual work chains (chains 1 and 2)
but does in-place decompression for combined page A; if chain 2 then
decompresses ahead of chain 1, the two chains race and produce a
corrupted decompressed page. This patch fixes it.
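To make the new HOOKED role concrete, here is a minimal, compilable
model of the role-based decisions (simplified names; a sketch of the
idea, not the real unzip_vle.c code):

#include <stdbool.h>
#include <stdio.h>

enum work_role {
	WORK_SECONDARY,
	WORK_PRIMARY,
	WORK_PRIMARY_HOOKED,	/* tail page shared with the next chain */
	WORK_PRIMARY_FOLLOWED,	/* whole tail page under our control */
};

/* A hooked work may keep the chain "tight" (merged)... */
static bool may_keep_chain_tight(enum work_role role)
{
	return role >= WORK_PRIMARY_HOOKED;
}

/* ...but only a followed work may decompress in place into the tail
 * page; doing so while merely hooked is exactly the race fixed here. */
static bool may_decompress_in_place(enum work_role role)
{
	return role >= WORK_PRIMARY_FOLLOWED;
}

int main(void)
{
	enum work_role r;

	for (r = WORK_PRIMARY; r <= WORK_PRIMARY_FOLLOWED; r++)
		printf("role=%d tight=%d in-place=%d\n", r,
		       may_keep_chain_tight(r),
		       may_decompress_in_place(r));
	return 0;
}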
Fixes: 3883a79abd02 ("staging: erofs: introduce VLE decompression support")
Cc: <stable(a)vger.kernel.org> # 4.19+
Signed-off-by: Gao Xiang <gaoxiang25(a)huawei.com>
---
drivers/staging/erofs/unzip_vle.c | 69 +++++++++++++++++++++++++--------------
1 file changed, 44 insertions(+), 25 deletions(-)
diff --git a/drivers/staging/erofs/unzip_vle.c b/drivers/staging/erofs/unzip_vle.c
index 8721f0a41d15..05aa5d3a2e82 100644
--- a/drivers/staging/erofs/unzip_vle.c
+++ b/drivers/staging/erofs/unzip_vle.c
@@ -60,15 +60,30 @@ enum z_erofs_vle_work_role {
Z_EROFS_VLE_WORK_SECONDARY,
Z_EROFS_VLE_WORK_PRIMARY,
/*
- * The current work has at least been linked with the following
- * processed chained works, which means if the processing page
- * is the tail partial page of the work, the current work can
- * safely use the whole page, as illustrated below:
- * +--------------+-------------------------------------------+
- * | tail page | head page (of the previous work) |
- * +--------------+-------------------------------------------+
- * /\ which belongs to the current work
- * [ (*) this page can be used for the current work itself. ]
+ * The current work was the tail of an existing chain, but the following
+ * processed chained works are now all hooked up to it.
+ * A new chain should be created for the remaining unprocessed works,
+ * therefore different from Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED,
+ * the next work cannot reuse the whole page in the following scenario:
+ * ________________________________________________________________
+ * | tail (partial) page | head (partial) page |
+ * | (belongs to the next work) | (belongs to the current work) |
+ * |_______PRIMARY_FOLLOWED_______|________PRIMARY_HOOKED___________|
+ */
+ Z_EROFS_VLE_WORK_PRIMARY_HOOKED,
+ /*
+ * The current work has been linked with the processed chained works,
+ * and could be also linked with the potential remaining works, which
+ * means if the processing page is the tail partial page of the work,
+ * the current work can safely use the whole page (since the next work
+ * is under control) for in-place decompression, as illustrated below:
+ * ________________________________________________________________
+ * | tail (partial) page | head (partial) page |
+ * | (of the current work) | (of the previous work) |
+ * | PRIMARY_FOLLOWED or | |
+ * |_____PRIMARY_HOOKED____|____________PRIMARY_FOLLOWED____________|
+ *
+ * [ (*) the above page can be used for the current work itself. ]
*/
Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED,
Z_EROFS_VLE_WORK_MAX
@@ -237,10 +252,10 @@ static int z_erofs_vle_work_add_page(
return ret ? 0 : -EAGAIN;
}
-static inline bool try_to_claim_workgroup(
- struct z_erofs_vle_workgroup *grp,
- z_erofs_vle_owned_workgrp_t *owned_head,
- bool *hosted)
+static enum z_erofs_vle_work_role
+try_to_claim_workgroup(struct z_erofs_vle_workgroup *grp,
+ z_erofs_vle_owned_workgrp_t *owned_head,
+ bool *hosted)
{
DBG_BUGON(*hosted == true);
@@ -254,6 +269,9 @@ static inline bool try_to_claim_workgroup(
*owned_head = grp;
*hosted = true;
+ /* lucky, I am the followee :) */
+ return Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED;
+
} else if (grp->next == Z_EROFS_VLE_WORKGRP_TAIL) {
/*
* type 2, link to the end of an existing open chain,
@@ -263,12 +281,11 @@ static inline bool try_to_claim_workgroup(
if (Z_EROFS_VLE_WORKGRP_TAIL != cmpxchg(&grp->next,
Z_EROFS_VLE_WORKGRP_TAIL, *owned_head))
goto retry;
-
*owned_head = Z_EROFS_VLE_WORKGRP_TAIL;
- } else
- return false; /* :( better luck next time */
+ return Z_EROFS_VLE_WORK_PRIMARY_HOOKED;
+ }
- return true; /* lucky, I am the followee :) */
+ return Z_EROFS_VLE_WORK_PRIMARY; /* :( better luck next time */
}
static struct z_erofs_vle_work *
@@ -343,12 +360,8 @@ z_erofs_vle_work_lookup(struct super_block *sb,
*hosted = false;
if (!primary)
*role = Z_EROFS_VLE_WORK_SECONDARY;
- /* claim the workgroup if possible */
- else if (try_to_claim_workgroup(grp, owned_head, hosted))
- *role = Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED;
- else
- *role = Z_EROFS_VLE_WORK_PRIMARY;
-
+ else /* claim the workgroup if possible */
+ *role = try_to_claim_workgroup(grp, owned_head, hosted);
return work;
}
@@ -431,6 +444,9 @@ static inline void __update_workgrp_llen(struct z_erofs_vle_workgroup *grp,
}
}
+#define builder_is_hooked(builder) \
+ ((builder)->role >= Z_EROFS_VLE_WORK_PRIMARY_HOOKED)
+
#define builder_is_followed(builder) \
((builder)->role >= Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED)
@@ -595,7 +611,7 @@ static int z_erofs_do_read_page(struct z_erofs_vle_frontend *fe,
struct z_erofs_vle_work_builder *const builder = &fe->builder;
const loff_t offset = page_offset(page);
- bool tight = builder_is_followed(builder);
+ bool tight = builder_is_hooked(builder);
struct z_erofs_vle_work *work = builder->work;
#ifdef EROFS_FS_HAS_MANAGED_CACHE
@@ -659,7 +675,7 @@ static int z_erofs_do_read_page(struct z_erofs_vle_frontend *fe,
builder->role = Z_EROFS_VLE_WORK_PRIMARY;
#endif
- tight &= builder_is_followed(builder);
+ tight &= builder_is_hooked(builder);
work = builder->work;
hitted:
cur = end - min_t(unsigned, offset + end - map->m_la, end);
@@ -674,6 +690,9 @@ static int z_erofs_do_read_page(struct z_erofs_vle_frontend *fe,
(tight ? Z_EROFS_PAGE_TYPE_EXCLUSIVE :
Z_EROFS_VLE_PAGE_TYPE_TAIL_SHARED));
+ if (cur)
+ tight &= builder_is_followed(builder);
+
retry:
err = z_erofs_vle_work_add_page(builder, page, page_type);
/* should allocate an additional staging page for pagevec */
--
2.14.4
Some time ago blk_execute_rq() was modified such that it no longer
allocates a sense buffer. Make sg_io() allocate and use a sense buffer.
This patch prevents the following bug from being triggered when running the
libiscsi tests against the scsi_debug driver:
usercopy: Kernel memory exposure attempt detected from null address (offset 0, size 18)!
------------[ cut here ]------------
kernel BUG at mm/usercopy.c:102!
CPU: 5 PID: 693 Comm: iscsi-test-cu Not tainted 5.0.0-rc3-dbg+ #3
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
RIP: 0010:usercopy_abort+0x7a/0x7c
Call Trace:
__check_object_size.cold.1+0x37/0x3d
sg_io+0x5a2/0x700
scsi_cmd_ioctl+0x4d4/0x540
scsi_cmd_blk_ioctl+0x7b/0x8b
sd_ioctl+0xba/0x150
blkdev_ioctl+0x6e1/0xea0
block_ioctl+0x79/0x90
do_vfs_ioctl+0x12b/0x9b0
ksys_ioctl+0x41/0x80
__x64_sys_ioctl+0x43/0x50
do_syscall_64+0x71/0x210
entry_SYSCALL_64_after_hwframe+0x49/0xbe
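The essence of the fix is that sg_io() owns a sense buffer on its own
stack and points the request at it before dispatch, so completion never
reads or writes through a NULL pointer. A minimal standalone sketch of
that pattern (model struct and names, not the real block-layer types):

#include <stdio.h>

#define SCSI_SENSE_BUFFERSIZE 96

struct scsi_request_model {
	unsigned char *sense;
	unsigned int sense_len;
};

static void complete_request(struct scsi_request_model *req)
{
	/* Simulated completion: the driver writes sense data through
	 * req->sense; before the fix this pointer was NULL here. */
	req->sense[0] = 0x70;
	req->sense_len = 18;
}

int main(void)
{
	unsigned char sense[SCSI_SENSE_BUFFERSIZE] = { 0 };
	struct scsi_request_model req = { 0 };

	req.sense = sense;	/* the one-line essence of the fix */
	complete_request(&req);
	printf("sense_len=%u key byte=0x%02x\n", req.sense_len, sense[0]);
	return 0;
}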
Cc: Christoph Hellwig <hch(a)lst.de>
Cc: Martin K. Petersen <martin.petersen(a)oracle.com>
Cc: Douglas Gilbert <dgilbert(a)interlog.com>
Cc: <stable(a)vger.kernel.org> # v4.11+
Fixes: 82ed4db499b8 ("block: split scsi_request out of struct request")
Signed-off-by: Bart Van Assche <bvanassche(a)acm.org>
---
block/scsi_ioctl.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/block/scsi_ioctl.c b/block/scsi_ioctl.c
index 533f4aee8567..066929ec0d61 100644
--- a/block/scsi_ioctl.c
+++ b/block/scsi_ioctl.c
@@ -299,6 +299,7 @@ static int sg_io(struct request_queue *q, struct gendisk *bd_disk,
struct request *rq;
struct scsi_request *req;
struct bio *bio;
+ u8 sense[SCSI_SENSE_BUFFERSIZE];
if (hdr->interface_id != 'S')
return -EINVAL;
@@ -361,6 +362,7 @@ static int sg_io(struct request_queue *q, struct gendisk *bd_disk,
bio = rq->bio;
req->retries = 0;
+ req->sense = sense;
start_time = jiffies;
--
2.20.1.321.g9e740568ce-goog
Since blk_execute_rq() no longer allocates a sense buffer and no longer
initializes the sense pointer, the callers of blk_execute_rq() have to
initialize the sense pointer themselves. Hence this patch, which
initializes rq->sense and removes a superfluous memcpy() statement.
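Once rq->sense aliases the caller's buffer, completion writes land in
the caller's memory directly, which is why the memcpy() becomes
superfluous. A small sketch of that aliasing (toy struct, hypothetical
names, not the real scsi_request):

#include <assert.h>

struct rq_model {
	unsigned char *sense;
	unsigned int sense_len;
};

int main(void)
{
	unsigned char sense[96] = { 0 };
	struct rq_model rq = { .sense = sense };

	/* Simulated device completion writes through rq.sense. */
	rq.sense[0] = 0x70;
	rq.sense_len = 18;

	assert(rq.sense == sense);	/* same memory... */
	assert(sense[0] == 0x70);	/* ...so copying it back is a no-op */
	return 0;
}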
Cc: Christoph Hellwig <hch(a)lst.de>
Cc: Douglas Gilbert <dgilbert(a)interlog.com>
Cc: <stable(a)vger.kernel.org> # v4.11+
Fixes: 82ed4db499b8 ("block: split scsi_request out of struct request")
Signed-off-by: Bart Van Assche <bvanassche(a)acm.org>
---
drivers/scsi/scsi_lib.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 4feba3b5aff1..8b9f4b1bca35 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -271,6 +271,7 @@ int __scsi_execute(struct scsi_device *sdev, const unsigned char *cmd,
rq->cmd_len = COMMAND_SIZE(cmd[0]);
memcpy(rq->cmd, cmd, rq->cmd_len);
rq->retries = retries;
+ rq->sense = sense;
req->timeout = timeout;
req->cmd_flags |= flags;
req->rq_flags |= rq_flags | RQF_QUIET;
@@ -291,8 +292,6 @@ int __scsi_execute(struct scsi_device *sdev, const unsigned char *cmd,
if (resid)
*resid = rq->resid_len;
- if (sense && rq->sense_len)
- memcpy(sense, rq->sense, SCSI_SENSE_BUFFERSIZE);
if (sshdr)
scsi_normalize_sense(rq->sense, rq->sense_len, sshdr);
ret = rq->result;
--
2.20.1.321.g9e740568ce-goog
From: Eric Biggers <ebiggers(a)google.com>
Hash algorithms with an alignmask set, e.g. "xcbc(aes-aesni)" and
"michael_mic", fail the improved hash tests because they sometimes
produce the wrong digest. The bug is that in the case where a
scatterlist element crosses pages, not all the data is actually hashed
because the scatterlist walk terminates too early. This happens because
the 'nbytes' variable in crypto_hash_walk_done() is assigned the number
of bytes remaining in the page, then later interpreted as the number of
bytes remaining in the scatterlist element. Fix it.
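A standalone model of the corrected bookkeeping (simplified fields; the
real walk also tracks pages and alignment) illustrates why the
termination check must consult the entry total rather than the clamped
per-page count:

#include <stdio.h>

#define PAGE_SIZE 4096u

/* One scatterlist entry that may span several pages. */
struct walk {
	unsigned int entrylen;	/* bytes left in the current entry */
	unsigned int offset;	/* offset within the current page */
};

/* Bytes to hash in this step: clamp to the page, then charge them
 * against the entry. The bug was reusing this clamped per-page count
 * as the remaining entry length in the termination check. */
static unsigned int walk_step(struct walk *w)
{
	unsigned int nbytes = w->entrylen;

	if (nbytes > PAGE_SIZE - w->offset)
		nbytes = PAGE_SIZE - w->offset;
	w->entrylen -= nbytes;
	w->offset = 0;	/* the next step starts at a page boundary */
	return nbytes;
}

int main(void)
{
	struct walk w = { .entrylen = 6000, .offset = 1000 };

	while (w.entrylen)	/* check the entry total, not the page count */
		printf("hash %u bytes\n", walk_step(&w));
	return 0;
}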
Fixes: 900a081f6912 ("crypto: ahash - Fix early termination in hash walk")
Cc: stable(a)vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers(a)google.com>
---
crypto/ahash.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/crypto/ahash.c b/crypto/ahash.c
index ca0d3e281fef..81e2767e2164 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -86,17 +86,17 @@ static int hash_walk_new_entry(struct crypto_hash_walk *walk)
int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err)
{
unsigned int alignmask = walk->alignmask;
- unsigned int nbytes = walk->entrylen;
walk->data -= walk->offset;
- if (nbytes && walk->offset & alignmask && !err) {
- walk->offset = ALIGN(walk->offset, alignmask + 1);
- nbytes = min(nbytes,
- ((unsigned int)(PAGE_SIZE)) - walk->offset);
- walk->entrylen -= nbytes;
+ if (walk->entrylen && (walk->offset & alignmask) && !err) {
+ unsigned int nbytes;
+ walk->offset = ALIGN(walk->offset, alignmask + 1);
+ nbytes = min(walk->entrylen,
+ (unsigned int)(PAGE_SIZE - walk->offset));
if (nbytes) {
+ walk->entrylen -= nbytes;
walk->data += walk->offset;
return nbytes;
}
@@ -116,7 +116,7 @@ int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err)
if (err)
return err;
- if (nbytes) {
+ if (walk->entrylen) {
walk->offset = 0;
walk->pg++;
return hash_walk_next(walk);
--
2.20.1.321.g9e740568ce-goog
From: Eric Biggers <ebiggers(a)google.com>
gcmaes_crypt_by_sg() dereferences the NULL pointer returned by
scatterwalk_ffwd() when encrypting an empty plaintext and the source
scatterlist ends immediately after the associated data.
Fix it by only fast-forwarding to the src/dst data scatterlists if the
data length is nonzero.
This bug is reproduced by the "rfc4543(gcm(aes))" test vectors when run
with the new AEAD test manager.
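A toy scatterlist model (hypothetical helper sg_ffwd(), not the real
scatterwalk API) shows why fast-forwarding past all remaining data
yields a pointer that must not be dereferenced:

#include <stddef.h>
#include <stdio.h>

struct sg {
	const char *buf;
	size_t len;
	struct sg *next;
};

/* Toy stand-in for scatterwalk_ffwd(): skip 'skip' bytes and return
 * the element containing the next byte, or NULL past the end. */
static struct sg *sg_ffwd(struct sg *sg, size_t skip)
{
	while (sg && skip >= sg->len) {
		skip -= sg->len;
		sg = sg->next;
	}
	return sg;
}

int main(void)
{
	struct sg aad = { "assoc", 5, NULL };	/* list ends after the AAD */
	size_t datalen = 0;			/* empty plaintext */

	/* The fix: only walk into the data scatterlist when there is
	 * payload; otherwise sg_ffwd() returns NULL here. */
	if (datalen) {
		struct sg *src = sg_ffwd(&aad, 5);
		printf("data starts in element %p\n", (void *)src);
	} else {
		printf("no payload: skip the scatterlist walk entirely\n");
	}
	return 0;
}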
Fixes: e845520707f8 ("crypto: aesni - Update aesni-intel_glue to use scatter/gather")
Cc: <stable(a)vger.kernel.org> # v4.17+
Cc: Dave Watson <davejwatson(a)fb.com>
Signed-off-by: Eric Biggers <ebiggers(a)google.com>
---
arch/x86/crypto/aesni-intel_glue.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 9b5ccde3ef31..1e3d2102033a 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -813,11 +813,14 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req,
scatterwalk_map_and_copy(assoc, req->src, 0, assoclen, 0);
}
- src_sg = scatterwalk_ffwd(src_start, req->src, req->assoclen);
- scatterwalk_start(&src_sg_walk, src_sg);
- if (req->src != req->dst) {
- dst_sg = scatterwalk_ffwd(dst_start, req->dst, req->assoclen);
- scatterwalk_start(&dst_sg_walk, dst_sg);
+ if (left) {
+ src_sg = scatterwalk_ffwd(src_start, req->src, req->assoclen);
+ scatterwalk_start(&src_sg_walk, src_sg);
+ if (req->src != req->dst) {
+ dst_sg = scatterwalk_ffwd(dst_start, req->dst,
+ req->assoclen);
+ scatterwalk_start(&dst_sg_walk, dst_sg);
+ }
}
kernel_fpu_begin();
--
2.20.1.321.g9e740568ce-goog