Commit a408e4a86b36 ("ima: open a new file instance if no read permissions") tries to create a new file descriptor to calculate a file digest if the file has not been opened with the O_RDONLY flag. However, if a new file descriptor cannot be obtained, it sets the FMODE_READ flag in file->f_flags instead of file->f_mode.
This patch fixes the issue by replacing f_flags with f_mode, as it was before that commit.
Cc: stable@vger.kernel.org # 4.20.x
Fixes: a408e4a86b36 ("ima: open a new file instance if no read permissions")
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com
---
 security/integrity/ima/ima_crypto.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/security/integrity/ima/ima_crypto.c b/security/integrity/ima/ima_crypto.c
index 423c84f95a14..8ab17aa867dd 100644
--- a/security/integrity/ima/ima_crypto.c
+++ b/security/integrity/ima/ima_crypto.c
@@ -436,7 +436,7 @@ int ima_calc_file_hash(struct file *file, struct ima_digest_data *hash)
 		 */
 		pr_info_ratelimited("Unable to reopen file for reading.\n");
 		f = file;
-		f->f_flags |= FMODE_READ;
+		f->f_mode |= FMODE_READ;
 		modified_flags = true;
 	} else {
 		new_file_instance = true;
@@ -456,7 +456,7 @@ int ima_calc_file_hash(struct file *file, struct ima_digest_data *hash)
 	if (new_file_instance)
 		fput(f);
 	else if (modified_flags)
-		f->f_flags &= ~FMODE_READ;
+		f->f_mode &= ~FMODE_READ;
 
 	return rc;
 }
The mutex in init_desc(), introduced by commit 97426f985729 ("evm: prevent racing during tfm allocation"), prevents two tasks from concurrently setting *tfm. However, checking whether *tfm is NULL is not enough, as crypto_alloc_shash() can return an error pointer. The following sequence can happen:
Task A: *tfm = crypto_alloc_shash() <= error pointer
Task B: if (*tfm == NULL) <= *tfm is not NULL, use it
Task B: rc = crypto_shash_init(desc) <= panic
Task A: *tfm = NULL
This patch uses the IS_ERR_OR_NULL macro to determine whether or not a new crypto context must be created.
Cc: stable@vger.kernel.org
Fixes: 97426f985729 ("evm: prevent racing during tfm allocation")
Co-developed-by: Krzysztof Struczynski krzysztof.struczynski@huawei.com
Signed-off-by: Krzysztof Struczynski krzysztof.struczynski@huawei.com
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com
---
 security/integrity/evm/evm_crypto.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/security/integrity/evm/evm_crypto.c b/security/integrity/evm/evm_crypto.c
index 35682852ddea..77ad1e5a93e4 100644
--- a/security/integrity/evm/evm_crypto.c
+++ b/security/integrity/evm/evm_crypto.c
@@ -91,7 +91,7 @@ static struct shash_desc *init_desc(char type, uint8_t hash_algo)
 		algo = hash_algo_name[hash_algo];
 	}
 
-	if (*tfm == NULL) {
+	if (IS_ERR_OR_NULL(*tfm)) {
 		mutex_lock(&mutex);
 		if (*tfm)
 			goto out;
Hi Roberto, Krzysztof,
On Wed, 2020-03-25 at 17:11 +0100, Roberto Sassu wrote:
The mutex in init_desc(), introduced by commit 97426f985729 ("evm: prevent racing during tfm allocation"), prevents two tasks from concurrently setting *tfm. However, checking whether *tfm is NULL is not enough, as crypto_alloc_shash() can return an error pointer. The following sequence can happen:
Task A: *tfm = crypto_alloc_shash() <= error pointer
Task B: if (*tfm == NULL) <= *tfm is not NULL, use it
Task B: rc = crypto_shash_init(desc) <= panic
Task A: *tfm = NULL
This patch uses the IS_ERR_OR_NULL macro to determine whether or not a new crypto context must be created.
Cc: stable@vger.kernel.org
Fixes: 97426f985729 ("evm: prevent racing during tfm allocation")
Thank you. True, this commit introduced the mutex, but the actual problem is most likely the result of a crypto algorithm not being configured. Depending on the kernel and which crypto algorithms are enabled, verifying an EVM signature might not be possible. In the embedded environment, where the entire filesystem is updated, there shouldn't be any unknown EVM signature algorithms.
In case Greg or Sasha decide this patch should be backported, including the context/motivation in the patch description (first paragraph) would be helpful.
Mimi
Co-developed-by: Krzysztof Struczynski krzysztof.struczynski@huawei.com
Signed-off-by: Krzysztof Struczynski krzysztof.struczynski@huawei.com
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com
 security/integrity/evm/evm_crypto.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/security/integrity/evm/evm_crypto.c b/security/integrity/evm/evm_crypto.c
index 35682852ddea..77ad1e5a93e4 100644
--- a/security/integrity/evm/evm_crypto.c
+++ b/security/integrity/evm/evm_crypto.c
@@ -91,7 +91,7 @@ static struct shash_desc *init_desc(char type, uint8_t hash_algo)
 		algo = hash_algo_name[hash_algo];
 	}
 
-	if (*tfm == NULL) {
+	if (IS_ERR_OR_NULL(*tfm)) {
 		mutex_lock(&mutex);
 		if (*tfm)
 			goto out;
HUAWEI TECHNOLOGIES Duesseldorf GmbH, HRB 56063 Managing Director: Li Peng, Li Jian, Shi Yanli
-----Original Message-----
From: Mimi Zohar [mailto:zohar@linux.ibm.com]
Sent: Wednesday, April 22, 2020 3:45 PM
To: Roberto Sassu roberto.sassu@huawei.com
Cc: linux-integrity@vger.kernel.org; linux-security-module@vger.kernel.org; linux-kernel@vger.kernel.org; Krzysztof Struczynski krzysztof.struczynski@huawei.com; Silviu Vlasceanu Silviu.Vlasceanu@huawei.com; stable@vger.kernel.org
Subject: Re: [PATCH 2/5] evm: Check also if *tfm is an error pointer in init_desc()
Hi Roberto, Krzysztof,
On Wed, 2020-03-25 at 17:11 +0100, Roberto Sassu wrote:
The mutex in init_desc(), introduced by commit 97426f985729 ("evm: prevent racing during tfm allocation"), prevents two tasks from concurrently setting *tfm. However, checking whether *tfm is NULL is not enough, as crypto_alloc_shash() can return an error pointer. The following sequence can happen:
Task A: *tfm = crypto_alloc_shash() <= error pointer
Task B: if (*tfm == NULL) <= *tfm is not NULL, use it
Task B: rc = crypto_shash_init(desc) <= panic
Task A: *tfm = NULL
This patch uses the IS_ERR_OR_NULL macro to determine whether or not a new crypto context must be created.
Cc: stable@vger.kernel.org
Fixes: 97426f985729 ("evm: prevent racing during tfm allocation")
Thank you. True, this commit introduced the mutex, but the actual problem is most likely the result of a crypto algorithm not being configured. Depending on the kernel and which crypto algorithms are enabled, verifying an EVM signature might not be possible. In the embedded environment, where the entire filesystem is updated, there shouldn't be any unknown EVM signature algorithms.
Hi Mimi
right, the actual commit that introduced the issue is:
d46eb3699502b ("evm: crypto hash replaced by shash")
In case Greg or Sasha decide this patch should be backported, including the context/motivation in the patch description (first paragraph) would be helpful.
Ok. The main motivation is to avoid a kernel panic, especially when many files require an unsupported hash algorithm, which would increase the likelihood of the race condition I described.
Roberto
Co-developed-by: Krzysztof Struczynski krzysztof.struczynski@huawei.com
Signed-off-by: Krzysztof Struczynski krzysztof.struczynski@huawei.com
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com
 security/integrity/evm/evm_crypto.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/security/integrity/evm/evm_crypto.c b/security/integrity/evm/evm_crypto.c
index 35682852ddea..77ad1e5a93e4 100644
--- a/security/integrity/evm/evm_crypto.c
+++ b/security/integrity/evm/evm_crypto.c
@@ -91,7 +91,7 @@ static struct shash_desc *init_desc(char type, uint8_t hash_algo)
 		algo = hash_algo_name[hash_algo];
 	}
 
-	if (*tfm == NULL) {
+	if (IS_ERR_OR_NULL(*tfm)) {
 		mutex_lock(&mutex);
 		if (*tfm)
 			goto out;
From: Krzysztof Struczynski krzysztof.struczynski@huawei.com
Function hash_long() accepts an unsigned long, while currently only one byte is passed from ima_hash_key(), which calculates a key for ima_htable. Use more bytes to avoid frequent collisions.
The length of the buffer is not explicitly passed as a function parameter, because this function expects a digest whose length is greater than the size of an unsigned long.
Cc: stable@vger.kernel.org
Fixes: 3323eec921ef ("integrity: IMA as an integrity service provider")
Signed-off-by: Krzysztof Struczynski krzysztof.struczynski@huawei.com
---
 security/integrity/ima/ima.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h
index 64317d95363e..cf0022c2bc14 100644
--- a/security/integrity/ima/ima.h
+++ b/security/integrity/ima/ima.h
@@ -177,7 +177,7 @@ extern struct ima_h_table ima_htable;
 
 static inline unsigned long ima_hash_key(u8 *digest)
 {
-	return hash_long(*digest, IMA_HASH_BITS);
+	return hash_long(*((unsigned long *)digest), IMA_HASH_BITS);
 }
 
 #define __ima_hooks(hook)	\
Hi Roberto, Krzysztof,
On Wed, 2020-03-25 at 17:11 +0100, Roberto Sassu wrote:
From: Krzysztof Struczynski krzysztof.struczynski@huawei.com
Function hash_long() accepts unsigned long, while currently only one byte is passed from ima_hash_key(), which calculates a key for ima_htable. Use more bytes to avoid frequent collisions.
Length of the buffer is not explicitly passed as a function parameter, because this function expects a digest whose length is greater than the size of unsigned long.
Somehow I missed the original report of this problem: https://lore.kernel.org/patchwork/patch/674684/. This patch is definitely better, but how many unique keys are actually being used? Is it anywhere near IMA_MEASURE_HTABLE_SIZE (512)?
Do we need a new securityfs entry to display the number used?
Mimi
Cc: stable@vger.kernel.org
Fixes: 3323eec921ef ("integrity: IMA as an integrity service provider")
Signed-off-by: Krzysztof Struczynski krzysztof.struczynski@huawei.com
 security/integrity/ima/ima.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h
index 64317d95363e..cf0022c2bc14 100644
--- a/security/integrity/ima/ima.h
+++ b/security/integrity/ima/ima.h
@@ -177,7 +177,7 @@ extern struct ima_h_table ima_htable;
 
 static inline unsigned long ima_hash_key(u8 *digest)
 {
-	return hash_long(*digest, IMA_HASH_BITS);
+	return hash_long(*((unsigned long *)digest), IMA_HASH_BITS);
 }
 
 #define __ima_hooks(hook)	\
-----Original Message-----
From: Mimi Zohar [mailto:zohar@linux.ibm.com]
Sent: Wednesday, April 22, 2020 10:56 PM
To: Roberto Sassu roberto.sassu@huawei.com
Cc: linux-integrity@vger.kernel.org; linux-security-module@vger.kernel.org; linux-kernel@vger.kernel.org; Krzysztof Struczynski krzysztof.struczynski@huawei.com; Silviu Vlasceanu Silviu.Vlasceanu@huawei.com; stable@vger.kernel.org
Subject: Re: [PATCH 3/5] ima: Fix ima digest hash table key calculation
Hi Roberto, Krzysztof,
On Wed, 2020-03-25 at 17:11 +0100, Roberto Sassu wrote:
From: Krzysztof Struczynski krzysztof.struczynski@huawei.com
Function hash_long() accepts unsigned long, while currently only one byte is passed from ima_hash_key(), which calculates a key for ima_htable. Use more bytes to avoid frequent collisions.
Length of the buffer is not explicitly passed as a function parameter, because this function expects a digest whose length is greater than the size of unsigned long.
Somehow I missed the original report of this problem: https://lore.kernel.org/patchwork/patch/674684/. This patch is definitely better, but how many unique keys are actually being used? Is it anywhere near IMA_MEASURE_HTABLE_SIZE (512)?
I did a small test (with 1043 measurements):
slots: 250, max depth: 9 (without the patch)
slots: 448, max depth: 7 (with the patch)
Then, I increased the number of bits to 10:
slots: 251, max depth: 9 (without the patch)
slots: 660, max depth: 4 (with the patch)
Do we need a new securityfs entry to display the number used?
Probably it is useful only if the administrator can decide the number of slots.
Roberto
Cc: stable@vger.kernel.org
Fixes: 3323eec921ef ("integrity: IMA as an integrity service provider")
Signed-off-by: Krzysztof Struczynski krzysztof.struczynski@huawei.com
 security/integrity/ima/ima.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h
index 64317d95363e..cf0022c2bc14 100644
--- a/security/integrity/ima/ima.h
+++ b/security/integrity/ima/ima.h
@@ -177,7 +177,7 @@ extern struct ima_h_table ima_htable;
 
 static inline unsigned long ima_hash_key(u8 *digest)
 {
-	return hash_long(*digest, IMA_HASH_BITS);
+	return hash_long(*((unsigned long *)digest), IMA_HASH_BITS);
 }
 
 #define __ima_hooks(hook)	\
On Thu, 2020-04-23 at 10:21 +0000, Roberto Sassu wrote:
Hi Roberto, Krzysztof,
On Wed, 2020-03-25 at 17:11 +0100, Roberto Sassu wrote:
From: Krzysztof Struczynski krzysztof.struczynski@huawei.com
Function hash_long() accepts unsigned long, while currently only one byte is passed from ima_hash_key(), which calculates a key for ima_htable. Use more bytes to avoid frequent collisions.
Length of the buffer is not explicitly passed as a function parameter, because this function expects a digest whose length is greater than the size of unsigned long.
Somehow I missed the original report of this problem: https://lore.kernel.org/patchwork/patch/674684/. This patch is definitely better, but how many unique keys are actually being used? Is it anywhere near IMA_MEASURE_HTABLE_SIZE (512)?
I did a small test (with 1043 measurements):
slots: 250, max depth: 9 (without the patch)
slots: 448, max depth: 7 (with the patch)
448 out of 512 slots are used.
Then, I increased the number of bits to 10:
slots: 251, max depth: 9 (without the patch)
slots: 660, max depth: 4 (with the patch)
660 out of 1024 slots are used.
I wonder if there is any benefit to hashing a digest, instead of just using the first bits.
Do we need a new securityfs entry to display the number used?
Probably it is useful only if the administrator can decide the number of slots.
The securityfs suggestion was just a means for triggering the above debugging info you provided. Could you provide another patch with the debugging info?
thanks,
Mimi
-----Original Message-----
From: Mimi Zohar [mailto:zohar@linux.ibm.com]
Sent: Thursday, April 23, 2020 6:53 PM
To: Roberto Sassu roberto.sassu@huawei.com
Cc: linux-integrity@vger.kernel.org; linux-security-module@vger.kernel.org; linux-kernel@vger.kernel.org; Krzysztof Struczynski krzysztof.struczynski@huawei.com; Silviu Vlasceanu Silviu.Vlasceanu@huawei.com; stable@vger.kernel.org
Subject: Re: [PATCH 3/5] ima: Fix ima digest hash table key calculation
On Thu, 2020-04-23 at 10:21 +0000, Roberto Sassu wrote:
Hi Roberto, Krzysztof,
On Wed, 2020-03-25 at 17:11 +0100, Roberto Sassu wrote:
From: Krzysztof Struczynski krzysztof.struczynski@huawei.com
Function hash_long() accepts unsigned long, while currently only one byte is passed from ima_hash_key(), which calculates a key for ima_htable. Use more bytes to avoid frequent collisions.
Length of the buffer is not explicitly passed as a function parameter, because this function expects a digest whose length is greater than the size of unsigned long.
Somehow I missed the original report of this problem: https://lore.kernel.org/patchwork/patch/674684/. This patch is definitely better, but how many unique keys are actually being used? Is it anywhere near IMA_MEASURE_HTABLE_SIZE (512)?
I did a small test (with 1043 measurements):
slots: 250, max depth: 9 (without the patch)
slots: 448, max depth: 7 (with the patch)
448 out of 512 slots are used.
Then, I increased the number of bits to 10:
slots: 251, max depth: 9 (without the patch)
slots: 660, max depth: 4 (with the patch)
660 out of 1024 slots are used.
I wonder if there is any benefit to hashing a digest, instead of just using the first bits.
Previously, I calculated the max depth only until there was a match, not the full depth.
#1 return hash_long(*((unsigned long *)digest), IMA_HASH_BITS);
   #define IMA_HASH_BITS 9

Runtime measurements: 1488
Violations: 0
Slots (used/available): 484/512
Max depth hash table: 10

#2 return *(unsigned long *)digest % IMA_MEASURE_HTABLE_SIZE;
   #define IMA_HASH_BITS 9

Runtime measurements: 1491
Violations: 2
Slots (used/available): 489/512
Max depth hash table: 10

#3 return hash_long(*((unsigned long *)digest), IMA_HASH_BITS);
   #define IMA_HASH_BITS 10

Runtime measurements: 1489
Violations: 0
Slots (used/available): 780/1024
Max depth hash table: 6

#4 return *(unsigned long *)digest % IMA_MEASURE_HTABLE_SIZE;
   #define IMA_HASH_BITS 10

Runtime measurements: 1489
Violations: 0
Slots (used/available): 793/1024
Max depth hash table: 6
Roberto
Do we need a new securityfs entry to display the number used?
Probably it is useful only if the administrator can decide the number of slots.
The securityfs suggestion was just a means for triggering the above debugging info you provided. Could you provide another patch with the debugging info?
thanks,
Mimi
On Fri, 2020-04-24 at 12:18 +0000, Roberto Sassu wrote:
On Thu, 2020-04-23 at 10:21 +0000, Roberto Sassu wrote:
Hi Roberto, Krzysztof,
On Wed, 2020-03-25 at 17:11 +0100, Roberto Sassu wrote:
From: Krzysztof Struczynski krzysztof.struczynski@huawei.com
Function hash_long() accepts unsigned long, while currently only one byte is passed from ima_hash_key(), which calculates a key for ima_htable. Use more bytes to avoid frequent collisions.
Length of the buffer is not explicitly passed as a function parameter, because this function expects a digest whose length is greater than the size of unsigned long.
Somehow I missed the original report of this problem: https://lore.kernel.org/patchwork/patch/674684/. This patch is definitely better, but how many unique keys are actually being used? Is it anywhere near IMA_MEASURE_HTABLE_SIZE (512)?
I did a small test (with 1043 measurements):
slots: 250, max depth: 9 (without the patch)
slots: 448, max depth: 7 (with the patch)
448 out of 512 slots are used.
Then, I increased the number of bits to 10:
slots: 251, max depth: 9 (without the patch)
slots: 660, max depth: 4 (with the patch)
660 out of 1024 slots are used.
I wonder if there is any benefit to hashing a digest, instead of just using the first bits.
Previously, I calculated the max depth only until there was a match, not the full depth.
#1 return hash_long(*((unsigned long *)digest), IMA_HASH_BITS);
   #define IMA_HASH_BITS 9

Runtime measurements: 1488
Violations: 0
Slots (used/available): 484/512
Max depth hash table: 10

#2 return *(unsigned long *)digest % IMA_MEASURE_HTABLE_SIZE;
   #define IMA_HASH_BITS 9

Runtime measurements: 1491
Violations: 2
Slots (used/available): 489/512
Max depth hash table: 10

#3 return hash_long(*((unsigned long *)digest), IMA_HASH_BITS);
   #define IMA_HASH_BITS 10

Runtime measurements: 1489
Violations: 0
Slots (used/available): 780/1024
Max depth hash table: 6

#4 return *(unsigned long *)digest % IMA_MEASURE_HTABLE_SIZE;
   #define IMA_HASH_BITS 10

Runtime measurements: 1489
Violations: 0
Slots (used/available): 793/1024
Max depth hash table: 6
At least for this measurement list sample, there doesn't seem to be any benefit to hashing the digest. In terms of increasing the number of slots, the additional memory is minimal and shouldn't negatively affect small embedded devices. Please make sure checkpatch doesn't flag it.
thanks,
Mimi
[CC'ing Goldwyn Rodrigues]
Hi Roberto,
On Wed, 2020-03-25 at 17:11 +0100, Roberto Sassu wrote:
Commit a408e4a86b36 ("ima: open a new file instance if no read permissions") tries to create a new file descriptor to calculate a file digest if the file has not been opened with the O_RDONLY flag. However, if a new file descriptor cannot be obtained, it sets the FMODE_READ flag in file->f_flags instead of file->f_mode.
This patch fixes the issue by replacing f_flags with f_mode, as it was before that commit.
Cc: stable@vger.kernel.org # 4.20.x
Fixes: a408e4a86b36 ("ima: open a new file instance if no read permissions")
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com
 security/integrity/ima/ima_crypto.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/security/integrity/ima/ima_crypto.c b/security/integrity/ima/ima_crypto.c
index 423c84f95a14..8ab17aa867dd 100644
--- a/security/integrity/ima/ima_crypto.c
+++ b/security/integrity/ima/ima_crypto.c
@@ -436,7 +436,7 @@ int ima_calc_file_hash(struct file *file, struct ima_digest_data *hash)
 		 */
Thanks, Roberto. The comment above here and the rest of the code refers to flags. Both should be updated as well to reflect using f_mode.
 		pr_info_ratelimited("Unable to reopen file for reading.\n");
 		f = file;
-		f->f_flags |= FMODE_READ;
+		f->f_mode |= FMODE_READ;
 		modified_flags = true;
The variable should be changed to "modified_mode".
thanks,
Mimi
 	} else {
 		new_file_instance = true;
@@ -456,7 +456,7 @@ int ima_calc_file_hash(struct file *file, struct ima_digest_data *hash)
 	if (new_file_instance)
 		fput(f);
 	else if (modified_flags)
-		f->f_flags &= ~FMODE_READ;
+		f->f_mode &= ~FMODE_READ;
 
 	return rc;
 }
-----Original Message-----
From: linux-integrity-owner@vger.kernel.org [mailto:linux-integrity-owner@vger.kernel.org] On Behalf Of Mimi Zohar
Sent: Wednesday, April 22, 2020 2:03 PM
To: Roberto Sassu roberto.sassu@huawei.com
Cc: linux-integrity@vger.kernel.org; linux-security-module@vger.kernel.org; linux-kernel@vger.kernel.org; Krzysztof Struczynski krzysztof.struczynski@huawei.com; Silviu Vlasceanu Silviu.Vlasceanu@huawei.com; stable@vger.kernel.org; Goldwyn Rodrigues rgoldwyn@suse.com
Subject: Re: [PATCH 1/5] ima: Set file->f_mode instead of file->f_flags in ima_calc_file_hash()
[CC'ing Goldwyn Rodrigues]
Hi Roberto,
On Wed, 2020-03-25 at 17:11 +0100, Roberto Sassu wrote:
Commit a408e4a86b36 ("ima: open a new file instance if no read permissions") tries to create a new file descriptor to calculate a file digest if the file has not been opened with the O_RDONLY flag. However, if a new file descriptor cannot be obtained, it sets the FMODE_READ flag in file->f_flags instead of file->f_mode.
This patch fixes the issue by replacing f_flags with f_mode, as it was before that commit.
Cc: stable@vger.kernel.org # 4.20.x
Fixes: a408e4a86b36 ("ima: open a new file instance if no read permissions")
Signed-off-by: Roberto Sassu roberto.sassu@huawei.com
 security/integrity/ima/ima_crypto.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/security/integrity/ima/ima_crypto.c b/security/integrity/ima/ima_crypto.c
index 423c84f95a14..8ab17aa867dd 100644
--- a/security/integrity/ima/ima_crypto.c
+++ b/security/integrity/ima/ima_crypto.c
@@ -436,7 +436,7 @@ int ima_calc_file_hash(struct file *file, struct ima_digest_data *hash)
 		 */
Thanks, Roberto. The comment above here and the rest of the code refers to flags. Both should be updated as well to reflect using f_mode.
 		pr_info_ratelimited("Unable to reopen file for reading.\n");
 		f = file;
-		f->f_flags |= FMODE_READ;
+		f->f_mode |= FMODE_READ;
 		modified_flags = true;
The variable should be changed to "modified_mode".
Ok. I will send a new version of the patch.
Thanks
Roberto
 	} else {
 		new_file_instance = true;
@@ -456,7 +456,7 @@ int ima_calc_file_hash(struct file *file, struct ima_digest_data *hash)
 	if (new_file_instance)
 		fput(f);
 	else if (modified_flags)
-		f->f_flags &= ~FMODE_READ;
+		f->f_mode &= ~FMODE_READ;
 
 	return rc;
 }