TA security & TrustZone switching worlds #3554
Comments
2.1) The client (non-secure, EL0) passes arguments to the GNU/Linux driver (non-secure, EL1); the driver issues an SMC instruction (per the SMC Calling Convention, handled in secure EL3), which passes the arguments on to OP-TEE OS (secure, EL1); OP-TEE OS finally passes them to the trusted application (secure, EL0). That is my basic understanding of the data flow.
2.2) If by root privileges you mean root on the GNU/Linux OS: you could send arbitrary data through the provided interface, but you cannot access TrustZone secure memory or peripherals from within the non-secure world. Since OP-TEE executes from a secure memory region, it is protected from a potentially compromised non-secure world. However, if your trusted application or OP-TEE itself contains vulnerabilities that can be exploited by sending crafted data to the TEE, you may be able to read/write secure memory (through the vulnerable trusted code) or gain elevated privileges. Hope this helps. Since my answer is surely not complete, I highly encourage further replies :)
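To make the layering in 2.1/2.2 concrete, here is a toy Python model (not OP-TEE code; all class, method, and secret names are invented for illustration): the normal world can pass arbitrary arguments through the SMC-style entry point, but it never gets a direct handle on secure memory.

```python
class SecureWorld:
    """Simulates OP-TEE plus a TA: secrets live behind one entry point."""

    def __init__(self):
        # Stands in for TrustZone-protected memory (hypothetical secret).
        self._secure_memory = {"ta_secret": b"\x13\x37"}

    def smc(self, cmd, args):
        # Stands in for the monitor/OP-TEE dispatch: only well-defined
        # commands run, and they decide what (if anything) to return.
        if cmd == "process":
            return len(args)  # the "TA" consumes args, returns a result
        raise ValueError("unknown command")


def normal_world_client(sw, data):
    # CA -> Linux driver -> SMC: arbitrary data can be sent through the
    # interface, but secure memory is never read directly.
    return sw.smc("process", data)
```

A compromised normal world can call `smc()` with any data it likes (which is why vulnerable trusted code is dangerous), but absent such a bug there is no path to `_secure_memory` from outside.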
Hi @jforissier, As I read here and here, with
@21212124 Secure storage is always per-TA. In other words, a TA cannot read or write the secure storage of another TA. Different TAs always use different secure storage keys internally.
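For illustration, a rough Python sketch of how a per-TA storage key can be kept distinct per TA: the key is derived inside the secure world from a device-unique key and the TA's UUID. The exact KDF, key names, and UUIDs below are simplified assumptions, not OP-TEE's actual implementation.

```python
import hashlib
import hmac
import uuid


def derive_ta_storage_key(ssk: bytes, ta_uuid: uuid.UUID) -> bytes:
    """Bind the storage key to the TA identity: HMAC a (placeholder)
    device-unique Secure Storage Key with the TA's UUID."""
    return hmac.new(ssk, ta_uuid.bytes, hashlib.sha256).digest()


ssk = b"\x00" * 32  # placeholder for the device-unique secure storage key
ta_a = uuid.UUID("8aaaf200-2450-11e4-abe2-0002a5d5c51b")
ta_b = uuid.UUID("f4e750bb-1437-4fbf-8785-8d3580c34994")

# Different TAs get different keys, so neither can decrypt the other's storage.
assert derive_ta_storage_key(ssk, ta_a) != derive_ta_storage_key(ssk, ta_b)
```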
@jforissier But can a client (malicious or not) read from different TAs, i.e. read their stored keys?
To my knowledge, no. Not unless you expose this data in your TA implementation (by writing a key out to shared memory), or your secure world software contains bugs/weaknesses that are exploitable to make it expose sensitive data. Side-channel attacks are (generally) a potential problem as well, so even if your TA never exposes a secret key, other weaknesses in handling such secrets may allow an attacker to gain knowledge about them and possibly derive the secret indirectly from other information. Physical access to your device further complicates things, e.g., an attacker could dump the contents of external DRAM. Even if your secret is stored within a TrustZone secure memory partition, physical access to the memory would still allow reading it. For serious storage of cryptographic key material you may consider using an external, tamper-resistant element (Secure Element/HSM/TPM/...).
As a rule of thumb, one should always consider clients malicious, and the TA should never return (confidential) key material to the normal world.
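The rule of thumb above can be sketched as a toy command handler (plain Python, not TA code; the command names and class are made up): the key is only ever used inside the "TA", and only derived results cross the boundary to the client.

```python
import hashlib
import hmac


class KeyVaultTA:
    """Toy TA: the key never crosses the world boundary."""

    def __init__(self, key: bytes):
        self._key = key  # lives only in (simulated) secure memory

    def invoke(self, cmd: str, payload: bytes) -> bytes:
        if cmd == "sign":
            # Only a MAC over the payload leaves; never the key itself.
            return hmac.new(self._key, payload, hashlib.sha256).digest()
        if cmd == "export_key":
            # Treat every client as potentially malicious: no export path.
            raise PermissionError("key export is not a supported command")
        raise ValueError("unknown command")
```

A client can request as many signatures as it likes, but there is deliberately no command that returns `_key`.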
Thanks so much for the explanation @Raincode |
As we can read in this paper, it is apparently possible to implement CA authentication to validate access to the TEE. But I am not sure whether it is necessary to implement another module, or to change some part of the TrustZone driver, to get access to the CA image. Could anyone help me here? P.S. we cannot read the whole paper through this link, but the main part of the explanation of their system is all in there.
Interesting paper, and thanks for sharing. However, I do believe that their authentication proposal will not work. It's better than having nothing, but it's still not sufficient to protect against malicious data being sent to/from a TA. One can simply run Frida.re and hook all calls in libteec.so as well as tee-supplicant, and then you as an attacker can modify data in whatever way you want. I.e., this type of attack attaches to existing processes, so the authenticated CA doesn't help at all. I've had "Frida.re attacks on OP-TEE" on my to-do list for a while. The intention is not to show "how bad" TEE implementations are at this; the idea has instead been to find weaknesses in the current implementation and see what we can do about them. The conclusion is what we've said many times: the non-secure side (including the Linux kernel) should be considered untrusted.
@jbech-linaro Thanks for your answer and analysis. I did not know about Frida.re and I will have a look. Indeed, I am now more convinced that the safest approach is, as you just said, to consider the whole non-secure side untrusted. Thank you again.
Hi,
I have two questions:
1.- If a client X uses a TA to store keys or perform operations, can another client Y access those stored keys? How does OP-TEE verify that the caller is not an intruder? Is the stored data private to the trusted application that created it? Is knowing the UUID enough to access the TA?
I have read #137, #3407 and #3092, and I understand that this cannot be verified, right?
2.- Arm documentation "ARM Security Technology Building a Secure System using TrustZone Technology" says: The mechanisms by which the physical processor can enter monitor mode from the Normal world are tightly controlled, and are all viewed as exceptions to the monitor mode software. The entry to monitor can be triggered by software executing a dedicated instruction, the Secure Monitor Call (SMC) instruction, or by a subset of the hardware exception mechanisms. The IRQ, FIQ, external Data Abort, and external Prefetch Abort exceptions can all be configured to cause the processor to switch into monitor mode.
2.1.- What control is applied here? Does OP-TEE start in secure mode (NS = 0) and then switch to non-secure mode?
2.2.- If an attacker obtains root privileges, can they access the secure world? Is the secure monitor protected only by privilege level?
Thanks for the clarifications