On Mon, 31 Jul 2023 at 03:53, Jarkko Sakkinen jarkko@kernel.org wrote:
I quickly carved up a patch (attached), which is only compile tested because I do not have any AMD hardware at hand.
Is there some way to just see "this is a fTPM"?
Because honestly, even if AMD is the one that has had stuttering issues, the bigger argument is that there is simply no _point_ in supporting randomness from a firmware source.
There is no way anybody should believe that a firmware TPM generates better randomness than we do natively.
And there are many reasons to _not_ believe it. The AMD problem is just the most user-visible one.
Now, I'm not saying that a fTPM needs to be disabled in general - but I really feel like we should just do
    static int tpm_add_hwrng(struct tpm_chip *chip)
    {
            if (!IS_ENABLED(CONFIG_HW_RANDOM_TPM))
                    return 0;

            // If it's not hardware, don't treat it as such
            if (tpm_is_fTPM(chip))
                    return 0;
            [...]
and be done with it.
But hey, if we have no way to see that whole "this is firmware emulation", then just blocking AMD might be the only way.
Linus
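For illustration only, here is a minimal sketch of what such a hypothetical tpm_is_fTPM()-style check might reduce to if it were narrowed to the AMD case. It assumes the driver-internal tpm2_get_tpm_pt() helper is usable at this point and that the chip speaks TPM 2.0; the helper name and the manufacturer constant ("AMD\0" per the TCG vendor ID registry) are illustrative, not a final patch:

    /*
     * Hypothetical helper, sketch only: identify an AMD firmware TPM purely
     * by the TPM 2.0 manufacturer property. Locality handling and error
     * reporting are omitted for brevity.
     */
    static bool tpm_is_amd_ftpm(struct tpm_chip *chip)
    {
            u32 manufacturer;

            /* Only the TPM 2.0 property interface is queried here. */
            if (!(chip->flags & TPM_CHIP_FLAG_TPM2))
                    return false;

            if (tpm2_get_tpm_pt(chip, TPM2_PT_MANUFACTURER, &manufacturer, NULL))
                    return false;

            /* 0x414D4400 is "AMD\0", AMD's TCG manufacturer ID. */
            return manufacturer == 0x414D4400U;
    }

tpm_add_hwrng() could then bail out early when this returns true, exactly as in the snippet above.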
On 7/31/2023 2:05 PM, Linus Torvalds wrote:
On Mon, 31 Jul 2023 at 03:53, Jarkko Sakkinen jarkko@kernel.org wrote:
I quickly carved up a patch (attached), which is only compile tested because I do not have any AMD hardware at hand.
Is there some way to just see "this is a fTPM"?
How many fTPM implementations are there? We're talking like less than 5 right? Maybe just check against a static list when calling tpm_add_hwrng().
Because honestly, even if AMD is the one that has had stuttering issues, the bigger argument is that there is simply no _point_ in supporting randomness from a firmware source.
I've had some discussions today with a variety of people on this problem, and there is no advantage to getting RNG through the fTPM over RDRAND.
They both source the exact same hardware IP, but RDRAND is a *lot* more direct.
There is no way anybody should believe that a firmware TPM generates better randomness than we do natively.
And there are many reasons to _not_ believe it. The AMD problem is just the most user-visible one.
Now, I'm not saying that a fTPM needs to be disabled in general - but I really feel like we should just do
    static int tpm_add_hwrng(struct tpm_chip *chip)
    {
            if (!IS_ENABLED(CONFIG_HW_RANDOM_TPM))
                    return 0;

            // If it's not hardware, don't treat it as such
            if (tpm_is_fTPM(chip))
                    return 0;
            [...]
and be done with it.
But hey, if we have no way to see that whole "this is firmware emulation", then just blocking AMD might be the only way.
Linus
On Mon, 31 Jul 2023 at 12:18, Limonciello, Mario mario.limonciello@amd.com wrote:
Is there some way to just see "this is a fTPM"?
How many fTPM implementations are there? We're talking like less than 5 right? Maybe just check against a static list when calling tpm_add_hwrng().
Sounds sane. But I was hoping for some direct way to just query "are you a firmware SMI hook, or real hardware".
It would be lovely to avoid the list, because maybe AMD does - or has in the past done - discrete TPM hardware? So it might not be as easy as just checking against the manufacturer..
That said, maybe it really doesn't matter. I'm perfectly fine with just the "check for AMD as a manufacturer" too.
In fact, I'd be perfectly happy with not using the TPM for run-time randomness at all, and purely doing it for the bootup entropy, which is where I feel it matters a lot more.
I've had some discussions today with a variety of people on this problem, and there is no advantage to getting RNG through the fTPM over RDRAND.
Ack.
And that's true even if you _trust_ the fTPM.
That said, I see no real downside to using the TPM (whether firmware or discrete) to just add to the boot-time "we'll gather entropy for our random number generator from any source".
So it's purely the runtime randomness where I feel that the upside just isn't there, and the downsides are real.
Linus
On 7/31/2023 2:30 PM, Linus Torvalds wrote:
On Mon, 31 Jul 2023 at 12:18, Limonciello, Mario mario.limonciello@amd.com wrote:
Is there some way to just see "this is a fTPM"?
How many fTPM implementations are there? We're talking like less than 5 right? Maybe just check against a static list when calling tpm_add_hwrng().
Sounds sane. But I was hoping for some direct way to just query "are you a firmware SMI hook, or real hardware".
It would be lovely to avoid the list, because maybe AMD does - or has in the past done - discrete TPM hardware? So it might not be as easy as just checking against the manufacturer..
That said, maybe it really doesn't matter. I'm perfectly fine with just the "check for AMD as a manufacturer" too.
Jarkko's patch seems conceptually fine for now for the fire of the day, if that's the consensus on the direction for this.
In fact, I'd be perfectly happy with not using the TPM for run-time randomness at all, and purely doing it for the bootup entropy, which is where I feel it matters a lot more.
I've had some discussions today with a variety of people on this problem, and there is no advantage to getting RNG through the fTPM over RDRAND.
Ack.
And that's true even if you _trust_ the fTPM.
That said, I see no real downside to using the TPM (whether firmware or discrete) to just add to the boot-time "we'll gather entropy for our random number generator from any source".
So it's purely the runtime randomness where I feel that the upside just isn't there, and the downsides are real.
Linus
Are you thinking then to unregister the tpm hwrng "sometime" after boot?
What would be the right timing/event for this? Maybe rootfs_initcall?
On Mon, 31 Jul 2023 at 14:57, Limonciello, Mario mario.limonciello@amd.com wrote:
Are you thinking then to unregister the tpm hwrng "sometime" after boot?
No, I was more thinking that instead of registering it as a randomness source, you'd just do a one-time
    tpm_get_random(..);
    add_hwgenerator_randomness(..);
and leave it at that.
Even if there is some stutter due to some crazy firmware implementation for reading the random data, at boot time nobody will notice it.
Linus
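As a sketch of how small that one-shot seeding could be - assuming the four-argument form of add_hwgenerator_randomness() in current kernels; the helper name and the 32-byte read size are illustrative:

    #include <linux/random.h>
    #include <linux/string.h>
    #include <linux/tpm.h>

    /*
     * Hypothetical one-shot boot seeding, sketch only: read one buffer from
     * the TPM, mix it into the input pool with entropy credit, and never
     * touch the (possibly slow) firmware RNG again at runtime.
     */
    static void tpm_seed_rng_once(struct tpm_chip *chip)
    {
            u8 seed[32];
            int len;

            len = tpm_get_random(chip, seed, sizeof(seed));
            if (len <= 0)
                    return; /* No seed; the pool is no worse off than before. */

            /* Credit len bytes as len * 8 bits of hardware entropy. */
            add_hwgenerator_randomness(seed, len, len * 8, false);
            memzero_explicit(seed, sizeof(seed));
    }

Even if the firmware stutters while producing those bytes, a single call at registration time is invisible to the user, which is the point being made above.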
Hi all,
I've been tracking this issue with Mario on various threads and bugzilla for a while now. My suggestion over at bugzilla was to just disable all current AMD fTPMs by bumping the check for a major version number, so that the hardware people can re-enable it if it's ever fixed, but only if this is something that the hardware people would actually respect. As I understand it, Mario was going to check into it and see. Failing that, yea, just disabling hwrng on fTPM seems like a fine enough thing to do.
The reason I'm not too concerned about that is twofold:
- Systems with fTPM all have RDRAND anyway, so there's no entropy problem.
- fTPM *probably* uses the same random source as RDRAND -- the TRNG_OUT MMIO register -- so it's not really doing much more than what we already have available.
So this all seems fine. And Jarkko's patch seems more or less the straightforward way of disabling it. But with that said, in order of priority, maybe we should first try these:
1) Adjust the version check to a major-place fTPM version that AMD's hardware team pinky swears will have this bug fixed. (Though, I can already imagine somebody on the list shouting, "we don't trust hardware teams to do anything with unreleased stuff!", which could be valid.)
2) Remove the version check, but add some other query to detect AMD fTPM vs real TPM, and ban fTPM.
3) Remove the version check, and just check for AMD; this is Jarkko's patch.
Mario will know best the feasibility of (1) and (2).
Jason
On 7/31/23 18:40, Jason A. Donenfeld wrote:
Hi all,
I've been tracking this issue with Mario on various threads and bugzilla for a while now. My suggestion over at bugzilla was to just disable all current AMD fTPMs by bumping the check for a major version number, so that the hardware people can re-enable it if it's ever fixed, but only if this is something that the hardware people would actually respect. As I understand it, Mario was going to check into it and see. Failing that, yea, just disabling hwrng on fTPM seems like a fine enough thing to do.
The reason I'm not too concerned about that is twofold:
- Systems with fTPM all have RDRAND anyway, so there's no entropy problem.
- fTPM *probably* uses the same random source as RDRAND -- the
TRNG_OUT MMIO register -- so it's not really doing much more than what we already have available.
Yeah, I have ongoing conversations about this topic, but I also concluded your suspicion is correct. They both get their values from the integrated CCP HW IP.
So this all seems fine. And Jarkko's patch seems more or less the straightforward way of disabling it. But with that said, in order of priority, maybe we should first try these:
- Adjust the version check to a major-place fTPM version that AMD's
hardware team pinky swears will have this bug fixed. (Though, I can already imagine somebody on the list shouting, "we don't trust hardware teams to do anything with unreleased stuff!", which could be valid.)
I find it very likely the actual root cause is similar to what Linus suggested. If that's the case, I don't think the bug can be fixed by just an fTPM fix; it would rather require a BIOS fix.
This to me strengthens the argument to either not register fTPM as RNG in the first place or just use TPM for boot time entropy.
- Remove the version check, but add some other query to detect AMD
fTPM vs real TPM, and ban fTPM.
AMD doesn't make dTPMs, only fTPMs. It's tempting to try to use TPM2_PT_VENDOR_TPM_TYPE, but this is actually a vendor-specific value.
I don't see a reliable way in the spec to do this.
- Remove the version check, and just check for AMD; this is Jarkko's patch.
I have a counter-proposal to Jarkko's patch attached. This has two notable changes:
1) It only disables RNG generation in the case of having RDRAND or RDSEED.
2) It also matches Intel PTT.
I still do also think Linus' idea of TPMs only providing boot time entropy is worth weighing out.
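For point 1, a sketch of what that gating could look like; the helper name is illustrative and the feature checks are x86-only, so a real patch would need to decide what non-x86 platforms should do:

    #ifdef CONFIG_X86
    #include <asm/cpufeature.h>
    #endif

    /*
     * Hypothetical helper, sketch only: treat the fTPM hwrng as redundant
     * only when the CPU already offers its own hardware randomness, so
     * platforms without RDRAND/RDSEED keep whatever source they have.
     */
    static bool cpu_has_native_rng(void)
    {
    #ifdef CONFIG_X86
            return boot_cpu_has(X86_FEATURE_RDRAND) ||
                   boot_cpu_has(X86_FEATURE_RDSEED);
    #else
            return false;
    #endif
    }

tpm_add_hwrng() would then skip registration only when both this check and the fTPM match are true.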
I was following the issue in our "ROG for Linux" Discord channel and helping out some other users with it by shipping a kernel for Arch with CONFIG_HW_RANDOM_TPM disabled as the default recommended kernel for ROG laptops on Arch (my own device isn't affected by it because it is a Ryzen 4800HS).
I know it was argued here https://bugzilla.kernel.org/show_bug.cgi?id=217212#c16 against allowing the user to disable the fTPM as a random source via a boot-time parameter, but I still disagree with that.
Linux already has a parameter `random.trust_cpu` to control whether the CPU is trusted as a random source, so why can there not be a parameter like `random.trust_ftpm` (or `random.trust_tpm`)?
It might be my limited knowledge of this topic, but to me it feels like if there is a trust_cpu then Linux should have a trust_ftpm too.
Mateusz
On Tue Aug 1, 2023 at 6:04 AM EEST, Mario Limonciello wrote:
On 7/31/23 18:40, Jason A. Donenfeld wrote:
Hi all,
I've been tracking this issue with Mario on various threads and bugzilla for a while now. My suggestion over at bugzilla was to just disable all current AMD fTPMs by bumping the check for a major version number, so that the hardware people can re-enable it if it's ever fixed, but only if this is something that the hardware people would actually respect. As I understand it, Mario was going to check into it and see. Failing that, yea, just disabling hwrng on fTPM seems like a fine enough thing to do.
The reason I'm not too concerned about that is twofold:
- Systems with fTPM all have RDRAND anyway, so there's no entropy problem.
- fTPM *probably* uses the same random source as RDRAND -- the
TRNG_OUT MMIO register -- so it's not really doing much more than what we already have available.
Yeah, I have ongoing conversations about this topic, but I also concluded your suspicion is correct. They both get their values from the integrated CCP HW IP.
So this all seems fine. And Jarkko's patch seems more or less the straightforward way of disabling it. But with that said, in order of priority, maybe we should first try these:
- Adjust the version check to a major-place fTPM version that AMD's
hardware team pinky swears will have this bug fixed. (Though, I can already imagine somebody on the list shouting, "we don't trust hardware teams to do anything with unreleased stuff!", which could be valid.)
I find it very likely the actual root cause is similar to what Linus suggested. If that's the case, I don't think the bug can be fixed by just an fTPM fix; it would rather require a BIOS fix.
This to me strengthens the argument to either not register fTPM as RNG in the first place or just use TPM for boot time entropy.
- Remove the version check, but add some other query to detect AMD
fTPM vs real TPM, and ban fTPM.
AMD doesn't make dTPMs, only fTPMs. It's tempting to try to use TPM2_PT_VENDOR_TPM_TYPE, but this is actually a vendor-specific value.
I don't see a reliable way in the spec to do this.
- Remove the version check, and just check for AMD; this is Jarkko's patch.
I have a counter-proposal to Jarkko's patch attached. This has two notable changes:
- It only disables RNG generation in the case of having RDRAND or RDSEED.
- It also matches Intel PTT.
I still do also think Linus' idea of TPMs only providing boot time entropy is worth weighing out.
You should add something like TPM_CHIP_HWRNG_DISABLED instead and set this in tpm_crb before calling tpm_chip_register().
Nothing else concerning AMD hardware should be done in tpm-chip.c. It should only check TPM_CHIP_HWRNG_DISABLED at the beginning of tpm_add_hwrng().
BR, Jarkko
On Tue Aug 1, 2023 at 9:52 PM EEST, Jarkko Sakkinen wrote:
On Tue Aug 1, 2023 at 6:04 AM EEST, Mario Limonciello wrote:
On 7/31/23 18:40, Jason A. Donenfeld wrote:
Hi all,
I've been tracking this issue with Mario on various threads and bugzilla for a while now. My suggestion over at bugzilla was to just disable all current AMD fTPMs by bumping the check for a major version number, so that the hardware people can re-enable it if it's ever fixed, but only if this is something that the hardware people would actually respect. As I understand it, Mario was going to check into it and see. Failing that, yea, just disabling hwrng on fTPM seems like a fine enough thing to do.
The reason I'm not too concerned about that is twofold:
- Systems with fTPM all have RDRAND anyway, so there's no entropy problem.
- fTPM *probably* uses the same random source as RDRAND -- the
TRNG_OUT MMIO register -- so it's not really doing much more than what we already have available.
Yeah, I have ongoing conversations about this topic, but I also concluded your suspicion is correct. They both get their values from the integrated CCP HW IP.
So this all seems fine. And Jarkko's patch seems more or less the straightforward way of disabling it. But with that said, in order of priority, maybe we should first try these:
- Adjust the version check to a major-place fTPM version that AMD's
hardware team pinky swears will have this bug fixed. (Though, I can already imagine somebody on the list shouting, "we don't trust hardware teams to do anything with unreleased stuff!", which could be valid.)
I find it very likely the actual root cause is similar to what Linus suggested. If that's the case, I don't think the bug can be fixed by just an fTPM fix; it would rather require a BIOS fix.
This to me strengthens the argument to either not register fTPM as RNG in the first place or just use TPM for boot time entropy.
- Remove the version check, but add some other query to detect AMD
fTPM vs real TPM, and ban fTPM.
AMD doesn't make dTPMs, only fTPMs. It's tempting to try to use TPM2_PT_VENDOR_TPM_TYPE, but this is actually a vendor-specific value.
I don't see a reliable way in the spec to do this.
- Remove the version check, and just check for AMD; this is Jarkko's patch.
I have a counter-proposal to Jarkko's patch attached. This has two notable changes:
- It only disables RNG generation in the case of having RDRAND or RDSEED.
- It also matches Intel PTT.
I still do also think Linus' idea of TPMs only providing boot time entropy is worth weighing out.
You should add something like TPM_CHIP_HWRNG_DISABLED instead and set this in tpm_crb before calling tpm_chip_register().
Nothing else concerning AMD hardware should be done in tpm-chip.c. It should only check TPM_CHIP_HWRNG_DISABLED at the beginning of tpm_add_hwrng().
In English: I think adding the function to tpm-chip.c was a really bad idea in the first place, so let's revert that decision and do this correctly in tpm_crb.c.
BR, Jarkko
On Mon Jul 31, 2023 at 10:05 PM EEST, Linus Torvalds wrote:
On Mon, 31 Jul 2023 at 03:53, Jarkko Sakkinen jarkko@kernel.org wrote:
I quickly carved up a patch (attached), which is only compile tested because I do not have any AMD hardware at hand.
Is there some way to just see "this is a fTPM"?
Because honestly, even if AMD is the one that has had stuttering issues, the bigger argument is that there is simply no _point_ in supporting randomness from a firmware source.
There is no way anybody should believe that a firmware TPM generates better randomness than we do natively.
And there are many reasons to _not_ believe it. The AMD problem is just the most user-visible one.
Now, I'm not saying that a fTPM needs to be disabled in general - but I really feel like we should just do
    static int tpm_add_hwrng(struct tpm_chip *chip)
    {
            if (!IS_ENABLED(CONFIG_HW_RANDOM_TPM))
                    return 0;

            // If it's not hardware, don't treat it as such
            if (tpm_is_fTPM(chip))
                    return 0;
            [...]
and be done with it.
But hey, if we have no way to see that whole "this is firmware emulation", then just blocking AMD might be the only way.
Linus
I would disable it inside the tpm_crb driver, which is the driver used for fTPMs: they are identified by the MSFT0101 ACPI identifier.
I think the right scope is still AMD because we don't have such regressions with Intel fTPM.
I.e., I would move the helper I created into the tpm_crb driver, and add a new flag, let's say "TPM_CHIP_FLAG_HWRNG_DISABLED", which tpm_crb sets before calling tpm_chip_register().
Finally, tpm_add_hwrng() needs the following invariant:
    if (chip->flags & TPM_CHIP_FLAG_HWRNG_DISABLED)
            return 0;
How does this sound? I can refine this quickly from my first trial.
BR, Jarkko
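As a sketch of how those pieces could fit together - the flag's bit value and the crb-side detection helper below are placeholders, not the actual patch:

    /*
     * In tpm_crb.c, sketch only: mark the chip before registration when its
     * firmware RNG is known to be problematic. crb_ftpm_rng_is_defective()
     * is a hypothetical helper standing in for the real AMD check.
     */
    if (crb_ftpm_rng_is_defective(chip))
            chip->flags |= TPM_CHIP_FLAG_HWRNG_DISABLED;

    rc = tpm_chip_register(chip);

    /* In tpm-chip.c, the only change tpm_add_hwrng() then needs: */
    static int tpm_add_hwrng(struct tpm_chip *chip)
    {
            if (!IS_ENABLED(CONFIG_HW_RANDOM_TPM))
                    return 0;

            if (chip->flags & TPM_CHIP_FLAG_HWRNG_DISABLED)
                    return 0;

            /* ... existing hwrng name/read setup ... */
            return hwrng_register(&chip->hwrng);
    }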
On Tue, 1 Aug 2023 at 11:28, Jarkko Sakkinen jarkko@kernel.org wrote:
I would disable it inside the tpm_crb driver, which is the driver used for fTPMs: they are identified by the MSFT0101 ACPI identifier.
I think the right scope is still AMD because we don't have such regressions with Intel fTPM.
I'm ok with that.
I.e., I would move the helper I created into the tpm_crb driver, and add a new flag, let's say "TPM_CHIP_FLAG_HWRNG_DISABLED", which tpm_crb sets before calling tpm_chip_register().
Finally, tpm_add_hwrng() needs the following invariant:
    if (chip->flags & TPM_CHIP_FLAG_HWRNG_DISABLED)
            return 0;
How does this sound? I can refine this quickly from my first trial.
Sounds fine.
My only worry comes from my ignorance: do these fTPM devices *always* end up being enumerated through CRB, or do they potentially look "normal enough" that you can actually end up using them even without having that CRB driver loaded?
Put another way: is the CRB driver the _only_ way they are visible, or could some people hit on this through the TPM TIS interface if they have CRB disabled?
I see, for example, that qemu ends up emulating the TIS layer, and it might end up forwarding the TPM requests to something that is natively CRB?
But again: I don't know enough about CRB vs TIS, so the above may be a stupid question.
Linus
On 8/1/2023 13:42, Linus Torvalds wrote:
On Tue, 1 Aug 2023 at 11:28, Jarkko Sakkinen jarkko@kernel.org wrote:
I would disable it inside the tpm_crb driver, which is the driver used for fTPMs: they are identified by the MSFT0101 ACPI identifier.
I think the right scope is still AMD because we don't have such regressions with Intel fTPM.
I'm ok with that.
I.e., I would move the helper I created into the tpm_crb driver, and add a new flag, let's say "TPM_CHIP_FLAG_HWRNG_DISABLED", which tpm_crb sets before calling tpm_chip_register().
Finally, tpm_add_hwrng() needs the following invariant:
    if (chip->flags & TPM_CHIP_FLAG_HWRNG_DISABLED)
            return 0;
How does this sound? I can refine this quickly from my first trial.
Sounds fine.
This sounds fine by me too, thanks.
My only worry comes from my ignorance: do these fTPM devices *always* end up being enumerated through CRB, or do they potentially look "normal enough" that you can actually end up using them even without having that CRB driver loaded?
Put another way: is the CRB driver the _only_ way they are visible, or could some people hit on this through the TPM TIS interface if they have CRB disabled?
I see, for example, that qemu ends up emulating the TIS layer, and it might end up forwarding the TPM requests to something that is natively CRB?
But again: I don't know enough about CRB vs TIS, so the above may be a stupid question.
Linus
On Tue Aug 1, 2023 at 9:42 PM EEST, Linus Torvalds wrote:
On Tue, 1 Aug 2023 at 11:28, Jarkko Sakkinen jarkko@kernel.org wrote:
I would disable it inside the tpm_crb driver, which is the driver used for fTPMs: they are identified by the MSFT0101 ACPI identifier.
I think the right scope is still AMD because we don't have such regressions with Intel fTPM.
I'm ok with that.
I.e., I would move the helper I created into the tpm_crb driver, and add a new flag, let's say "TPM_CHIP_FLAG_HWRNG_DISABLED", which tpm_crb sets before calling tpm_chip_register().
Finally, tpm_add_hwrng() needs the following invariant:
    if (chip->flags & TPM_CHIP_FLAG_HWRNG_DISABLED)
            return 0;
How does this sound? I can refine this quickly from my first trial.
Sounds fine.
Mario, it would be good if you could send a fix candidate, but take my suggestion for a new TPM chip flag into account while doing it. Please send it as a separate patch, not as an attachment to this thread.
I can test and ack it, if it looks reasonable.
My only worry comes from my ignorance: do these fTPM devices *always* end up being enumerated through CRB, or do they potentially look "normal enough" that you can actually end up using them even without having that CRB driver loaded?
I know that QEMU has TPM passthrough but I don't know how it behaves exactly.
Put another way: is the CRB driver the _only_ way they are visible, or could some people hit on this through the TPM TIS interface if they have CRB disabled?
I'm not aware of such implementations.
I see, for example, that qemu ends up emulating the TIS layer, and it might end up forwarding the TPM requests to something that is natively CRB?
But again: I don't know enough about CRB vs TIS, so the above may be a stupid question.
Linus
I would focus exactly on what is known not to work and disable exactly that.
If someone still wants to enable TPM on such hardware, we can later on add a kernel command-line flag to enforce hwrng. This would of course be based on user feedback, not something I would add right now.
BR, Jarkko
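If such an override is ever wanted, it could be as small as a module parameter on the TPM core. To be clear, neither the parameter nor the behaviour below exists today; this is purely an illustration of the shape it could take:

    #include <linux/moduleparam.h>

    /*
     * Hypothetical opt-in, illustration only: booting with tpm.force_hwrng=1
     * would override the driver's TPM_CHIP_FLAG_HWRNG_DISABLED decision.
     */
    static bool force_hwrng;
    module_param(force_hwrng, bool, 0444);
    MODULE_PARM_DESC(force_hwrng,
                     "Register the TPM as a hwrng even if the driver disabled it");

    /* The early return in tpm_add_hwrng() would then become: */
    /*
     *      if ((chip->flags & TPM_CHIP_FLAG_HWRNG_DISABLED) && !force_hwrng)
     *              return 0;
     */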
On Tue, Aug 01, 2023 at 10:09:58PM +0300, Jarkko Sakkinen wrote:
On Tue Aug 1, 2023 at 9:42 PM EEST, Linus Torvalds wrote:
On Tue, 1 Aug 2023 at 11:28, Jarkko Sakkinen jarkko@kernel.org wrote:
I would disable it inside the tpm_crb driver, which is the driver used for fTPMs: they are identified by the MSFT0101 ACPI identifier.
I think the right scope is still AMD because we don't have such regressions with Intel fTPM.
I'm ok with that.
I.e., I would move the helper I created into the tpm_crb driver, and add a new flag, let's say "TPM_CHIP_FLAG_HWRNG_DISABLED", which tpm_crb sets before calling tpm_chip_register().
Finally, tpm_add_hwrng() needs the following invariant:
    if (chip->flags & TPM_CHIP_FLAG_HWRNG_DISABLED)
            return 0;
How does this sound? I can refine this quickly from my first trial.
Sounds fine.
Mario, it would be good if you could send a fix candidate, but take my suggestion for a new TPM chip flag into account while doing it. Please send it as a separate patch, not as an attachment to this thread.
I can test and ack it, if it looks reasonable.
My only worry comes from my ignorance: do these fTPM devices *always* end up being enumerated through CRB, or do they potentially look "normal enough" that you can actually end up using them even without having that CRB driver loaded?
I know that QEMU has TPM passthrough but I don't know how it behaves exactly.
I just created a passthrough tpm device with a guest which is using the tis driver, while the host is using crb (and apparently one of the AMD devices that has an impacted fTPM). It looks like there is a complete separation between the frontend and backends, with the front end providing either a tis or crb interface to the guest, and then the backend sending commands by writing to the passthrough device that was given, such as /dev/tpm0, or an emulator such as swtpm. Stefan can probably explain it much better than I can.
Regards, Jerry
Put another way: is the CRB driver the _only_ way they are visible, or could some people hit on this through the TPM TIS interface if they have CRB disabled?
I'm not aware of such implementations.
I see, for example, that qemu ends up emulating the TIS layer, and it might end up forwarding the TPM requests to something that is natively CRB?
But again: I don't know enough about CRB vs TIS, so the above may be a stupid question.
Linus
I would focus exactly on what is known not to work and disable exactly that.
If someone still wants to enable TPM on such hardware, we can later on add a kernel command-line flag to enforce hwrng. This would of course be based on user feedback, not something I would add right now.
BR, Jarkko
On 8/2/23 19:13, Jerry Snitselaar wrote:
On Tue, Aug 01, 2023 at 10:09:58PM +0300, Jarkko Sakkinen wrote:
On Tue Aug 1, 2023 at 9:42 PM EEST, Linus Torvalds wrote:
On Tue, 1 Aug 2023 at 11:28, Jarkko Sakkinen jarkko@kernel.org wrote:
I would disable it inside the tpm_crb driver, which is the driver used for fTPMs: they are identified by the MSFT0101 ACPI identifier.
I think the right scope is still AMD because we don't have such regressions with Intel fTPM.
I'm ok with that.
I.e., I would move the helper I created into the tpm_crb driver, and add a new flag, let's say "TPM_CHIP_FLAG_HWRNG_DISABLED", which tpm_crb sets before calling tpm_chip_register().
Finally, tpm_add_hwrng() needs the following invariant:
    if (chip->flags & TPM_CHIP_FLAG_HWRNG_DISABLED)
            return 0;
How does this sound? I can refine this quickly from my first trial.
Sounds fine.
Mario, it would be good if you could send a fix candidate, but take my suggestion for a new TPM chip flag into account while doing it. Please send it as a separate patch, not as an attachment to this thread.
I can test and ack it, if it looks reasonable.
My only worry comes from my ignorance: do these fTPM devices *always* end up being enumerated through CRB, or do they potentially look "normal enough" that you can actually end up using them even without having that CRB driver loaded?
I know that QEMU has TPM passthrough but I don't know how it behaves exactly.
I just created a passthrough tpm device with a guest which is using the tis driver, while the host is using crb (and apparently one of the AMD devices that has an impacted fTPM). It looks like there is a complete separation between the frontend and backends, with the front end providing either a tis or crb interface to the guest, and then the backend sending commands by writing to the passthrough device that was given, such as /dev/tpm0, or an emulator such as swtpm. Stefan can probably explain it much better than I can.
You explained it well... The passthrough TPM is only good for one VM (if at all), and all other VMs on the same machine should use a vTPM. Even one VM sharing the TPM with the host creates a potential mess with the shared resources of the TPM, such as the state of the PCRs.
When that guest VM using the passthrough device now identifies the underlying hardware TPM's firmware version, it will also take the same action to disable the TPM as a source for randomness. But then a VM with a passthrough TPM device should be rather rare...
Put another way: is the CRB driver the _only_ way they are visible, or could some people hit on this through the TPM TIS interface if they have CRB disabled?
I'm not aware of such implementations.
CRB and TIS are two distinct MMIO-type interfaces with different registers, etc.
AMD could theoretically build an fTPM with a CRB interface and then another one with the same firmware and a TIS interface, but why would they?
Stefan
I see, for example, that qemu ends up emulating the TIS layer, and it might end up forwarding the TPM requests to something that is natively CRB?
But again: I don't know enough about CRB vs TIS, so the above may be a stupid question.
Linus
I would focus exactly on what is known not to work and disable exactly that.
If someone still wants to enable TPM on such hardware, we can later on add a kernel command-line flag to enforce hwrng. This would of course be based on user feedback, not something I would add right now.
BR, Jarkko