crypto: Add curve_key and sender_data_type columns to inboundgroupsessions table/store #3797

Merged
11 changes: 9 additions & 2 deletions crates/matrix-sdk-crypto/src/olm/group_sessions/inbound.rs
@@ -36,8 +36,8 @@ use vodozemac::{
};

use super::{
BackedUpRoomKey, ExportedRoomKey, OutboundGroupSession, SenderData, SessionCreationError,
SessionKey,
BackedUpRoomKey, ExportedRoomKey, OutboundGroupSession, SenderData, SenderDataType,
SessionCreationError, SessionKey,
};
use crate::{
error::{EventError, MegolmResult},
@@ -477,6 +477,13 @@ impl InboundGroupSession {
pub(crate) fn mark_as_imported(&mut self) {
self.imported = true;
}

/// Return the [`SenderDataType`] of our [`SenderData`]. This is used during
/// serialization, to allow us to store the type in a separate queryable
/// column/property.
pub fn sender_data_type(&self) -> SenderDataType {
self.sender_data.to_type()
}
}

#[cfg(not(tarpaulin_include))]
19 changes: 9 additions & 10 deletions crates/matrix-sdk-crypto/src/olm/group_sessions/sender_data.rs
@@ -97,6 +97,15 @@ impl SenderData {
pub fn legacy() -> Self {
Self::UnknownDevice { legacy_session: true, owner_check_failed: false }
}

/// Return our type: `UnknownDevice`, `DeviceInfo`, or `SenderKnown`.
pub fn to_type(&self) -> SenderDataType {
match self {
Self::UnknownDevice { .. } => SenderDataType::UnknownDevice,
Self::DeviceInfo { .. } => SenderDataType::DeviceInfo,
Self::SenderKnown { .. } => SenderDataType::SenderKnown,
}
}
}

/// Used when deserialising and the sender_data property is missing.
@@ -124,16 +133,6 @@ pub enum SenderDataType {
SenderKnown = 3,
}

impl From<SenderData> for SenderDataType {
fn from(value: SenderData) -> Self {
match value {
SenderData::UnknownDevice { .. } => Self::UnknownDevice,
SenderData::DeviceInfo { .. } => Self::DeviceInfo,
SenderData::SenderKnown { .. } => Self::SenderKnown,
}
}
}

#[cfg(test)]
mod tests {
use assert_matches2::assert_let;
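Taken together, the new `InboundGroupSession::sender_data_type()` accessor and `SenderData::to_type()` let a store persist the sender-data variant as a plain number next to the pickled session. The sketch below is illustrative only and not code from this PR: the `queryable_columns` helper is hypothetical, and it assumes an `InboundGroupSession::sender_key()` accessor plus the numeric discriminants shown above (e.g. `SenderKnown = 3`).

```rust
use matrix_sdk_crypto::olm::InboundGroupSession;

/// Hypothetical helper: derive the two values that the new queryable
/// columns/properties are built from.
fn queryable_columns(session: &InboundGroupSession) -> (String, u8) {
    // The sender's Curve25519 key, base64-encoded so it can be used as an
    // index key (`sender_key()` is assumed here for illustration).
    let curve_key = session.sender_key().to_base64();
    // `SenderDataType` is a fieldless enum with explicit discriminants, so a
    // plain cast gives a stable integer to store and index.
    let sender_data_type = session.sender_data_type() as u8;
    (curve_key, sender_data_type)
}
```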
202 changes: 182 additions & 20 deletions crates/matrix-sdk-indexeddb/src/crypto_store/migrations/mod.rs
@@ -26,6 +26,7 @@ use crate::{
mod old_keys;
mod v0_to_v5;
mod v10_to_v11;
mod v11_to_v12;
mod v5_to_v7;
mod v7;
mod v7_to_v8;
@@ -62,6 +63,40 @@ impl Drop for MigrationDb {
}
}

/// The latest version of the schema we can support. If we encounter a database
/// version with a higher schema version, we will return an error.
///
/// A note on how this works.
///
/// Normally, when you open an indexeddb database, you tell it the "schema
/// version" that you know about. If the existing database is older than
/// that, it lets you run a migration. If the existing database is newer, then
/// it assumes that there have been incompatible schema changes and complains
/// with an error ("The requested version (10) is less than the existing version
/// (11)").
///
/// The problem with this is that, if someone upgrades their installed
/// application, then realises it was a terrible mistake and tries to roll
/// back, then suddenly every user's session is completely hosed. (They see
/// an "unable to restore session" dialog.) Often, schema updates aren't
/// actually backwards-incompatible — for example, existing code will work just
/// fine if someone adds a new store or a new index — so this approach is too
/// heavy-handed.
///
/// The solution we take here is to say "any schema changes up to
/// [`MAX_SUPPORTED_SCHEMA_VERSION`] will be backwards-compatible". If, at some
/// point, we do make a breaking change, we will give that schema version a
/// higher number. Then, rather than using the implicit version check that comes
/// with `indexedDB.open(name, version)`, we explicitly check the version
/// ourselves.
///
/// It is expected that we will use version numbers that are multiples of 10 to
/// represent breaking changes — for example, version 20 is a breaking change,
/// as is version 30, but versions 21-29 are all backwards compatible with
/// version 20. In other words, if you divide by 10, you get something
/// approaching semver: version 20 is major version 2, minor version 0.
const MAX_SUPPORTED_SCHEMA_VERSION: u32 = 19;

/// Open the indexeddb with the given name, upgrading it to the latest version
/// of the schema if necessary.
pub async fn open_and_upgrade_db(
@@ -82,6 +117,16 @@ pub async fn open_and_upgrade_db(

let old_version = db_version(name).await?;

// If the database version is too new, bail out. We assume that schema updates
// all the way up to `MAX_SUPPORTED_SCHEMA_VERSION` will be
// backwards-compatible.
if old_version > MAX_SUPPORTED_SCHEMA_VERSION {
return Err(IndexeddbCryptoStoreError::SchemaTooNewError {
max_supported_version: MAX_SUPPORTED_SCHEMA_VERSION,
current_version: old_version,
});
}

if old_version < 5 {
v0_to_v5::schema_add(name).await?;
}
@@ -109,10 +154,22 @@

if old_version < 11 {
v10_to_v11::data_migrate(name, serializer).await?;
v10_to_v11::schema_bump(name).await?;
}

if old_version < 12 {
v11_to_v12::schema_add(name).await?;
}

// If you add more migrations here, you'll need to update
// `tests::EXPECTED_SCHEMA_VERSION`.

// NOTE: IF YOU MAKE A BREAKING CHANGE TO THE SCHEMA, BUMP THE SCHEMA VERSION TO
// SOMETHING HIGHER THAN `MAX_SUPPORTED_SCHEMA_VERSION`! (And then bump
// `MAX_SUPPORTED_SCHEMA_VERSION` itself to the next multiple of 10).

// Open and return the DB (we know it's at the latest version)
Ok(IdbDatabase::open_u32(name, 11)?.await?)
Ok(IdbDatabase::open(name)?.await?)
}

async fn db_version(name: &str) -> Result<u32, IndexeddbCryptoStoreError> {
@@ -126,7 +183,7 @@ type OldVersion = u32;

async fn do_schema_upgrade<F>(name: &str, version: u32, f: F) -> Result<(), DomException>
where
F: Fn(&IdbDatabase, OldVersion) -> Result<(), JsValue> + 'static,
F: Fn(&IdbDatabase, IdbTransaction<'_>, OldVersion) -> Result<(), JsValue> + 'static,
{
info!("IndexeddbCryptoStore upgrade schema -> v{version} starting");
let mut db_req: OpenDbRequest = IdbDatabase::open_u32(name, version)?;
@@ -138,7 +195,7 @@ where
let old_version = evt.old_version() as u32;

// Run the upgrade code we were supplied
f(evt.db(), old_version)
f(evt.db(), evt.transaction(), old_version)
}));

let db = db_req.await?;
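The callback now receives the version-change `IdbTransaction` because adding an index to an object store that already exists has to go through that transaction rather than the database handle. The `v11_to_v12` module itself is not expanded in this view; the following is only a sketch of what its `schema_add` plausibly looks like, reusing the store name, property names, and index name asserted in the tests below. The real module may route these through named `keys::` constants, and the exact `IdbKeyPath` constructor depends on the `indexed_db_futures` version in use.

```rust
use indexed_db_futures::prelude::*;
use web_sys::DomException;

// Assumes this sketch lives as a submodule of `migrations`, like the other
// vX_to_vY modules.
use super::do_schema_upgrade;

/// Sketch of a v11 -> v12 upgrade: index the new `curve_key` and
/// `sender_data_type` properties on the existing sessions store.
pub(crate) async fn schema_add(name: &str) -> Result<(), DomException> {
    do_schema_upgrade(name, 12, |_db, transaction, _old_version| {
        // The store already exists, so fetch it from the upgrade transaction
        // and only add the new (non-unique) compound index.
        let object_store = transaction.object_store("inbound_group_sessions3")?;
        object_store.create_index(
            "inbound_group_session_curve_key_sender_data_type_idx",
            &IdbKeyPath::str_sequence(&["curve_key", "sender_data_type"]),
        )?;
        Ok(())
    })
    .await
}
```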
@@ -169,8 +226,9 @@ fn add_unique_index<'a>(

#[cfg(all(test, target_arch = "wasm32"))]
mod tests {
use std::{future::Future, sync::Arc};
use std::{cell::Cell, future::Future, rc::Rc, sync::Arc};

use assert_matches::assert_matches;
use gloo_utils::format::JsValueSerdeExt;
use indexed_db_futures::prelude::*;
use matrix_sdk_common::js_tracing::make_tracing_subscriber;
@@ -189,15 +247,15 @@ mod tests {

use super::{v0_to_v5, v7::InboundGroupSessionIndexedDbObject2};
use crate::{
crypto_store::{
indexeddb_serializer::MaybeEncrypted, keys, migrations::*,
InboundGroupSessionIndexedDbObject,
},
crypto_store::{keys, migrations::*, InboundGroupSessionIndexedDbObject},
IndexeddbCryptoStore,
};

wasm_bindgen_test::wasm_bindgen_test_configure!(run_in_browser);

/// The schema version we expect after we open the store.
const EXPECTED_SCHEMA_VERSION: u32 = 12;

/// Adjust this to do a more comprehensive perf test
const NUM_RECORDS_FOR_PERF: usize = 2_000;

@@ -348,6 +406,8 @@ mod tests {
pickled_session: serializer.maybe_encrypt_value(pickled_session).unwrap(),
needs_backup: false,
backed_up_to: -1,
curve_key: None,
sender_data_type: None,
};
let session_js: JsValue = serde_wasm_bindgen::to_value(&session_dbo).unwrap();

@@ -413,22 +473,25 @@
/// Test migrating `inbound_group_sessions` data from store v5 to latest,
/// on a store with encryption disabled.
#[async_test]
async fn test_v8_v10_migration_unencrypted() {
test_v8_v10_migration_with_cipher("test_v8_migration_unencrypted", None).await
async fn test_v8_v10_v12_migration_unencrypted() {
test_v8_v10_v12_migration_with_cipher("test_v8_migration_unencrypted", None).await
}

/// Test migrating `inbound_group_sessions` data from store v5 to latest,
/// on a store with encryption enabled.
#[async_test]
async fn test_v8_v10_migration_encrypted() {
async fn test_v8_v10_v12_migration_encrypted() {
let cipher = StoreCipher::new().unwrap();
test_v8_v10_migration_with_cipher("test_v8_migration_encrypted", Some(Arc::new(cipher)))
.await;
test_v8_v10_v12_migration_with_cipher(
"test_v8_migration_encrypted",
Some(Arc::new(cipher)),
)
.await;
}

/// Helper function for `test_v8_v10_migration_{un,}encrypted`: test
/// migrating `inbound_group_sessions` data from store v5 to store v10.
async fn test_v8_v10_migration_with_cipher(
/// Helper function for `test_v8_v10_v12_migration_{un,}encrypted`: test
/// migrating `inbound_group_sessions` data from store v5 to store v12.
async fn test_v8_v10_v12_migration_with_cipher(
db_prefix: &str,
store_cipher: Option<Arc<StoreCipher>>,
) {
@@ -472,13 +535,17 @@ mod tests {
assert!(!fetched_not_backed_up_session.backed_up());

// For v10: they have the backed_up_to property and it is indexed
assert_matches_v10_schema(db_name, store, fetched_backed_up_session).await;
assert_matches_v10_schema(&db_name, &store, &fetched_backed_up_session).await;

// For v12: they have the curve_key and sender_data_type properties and they are
// indexed
assert_matches_v12_schema(&db_name, &store, &fetched_backed_up_session).await;
}

async fn assert_matches_v10_schema(
db_name: String,
store: IndexeddbCryptoStore,
fetched_backed_up_session: InboundGroupSession,
db_name: &str,
store: &IndexeddbCryptoStore,
fetched_backed_up_session: &InboundGroupSession,
) {
let db = IdbDatabase::open(&db_name).unwrap().await.unwrap();
assert!(db.version() >= 10.0);
@@ -498,6 +565,32 @@
db.close();
}

async fn assert_matches_v12_schema(
db_name: &str,
store: &IndexeddbCryptoStore,
session: &InboundGroupSession,
) {
let db = IdbDatabase::open(&db_name).unwrap().await.unwrap();
assert!(db.version() >= 10.0);
let transaction = db.transaction_on_one("inbound_group_sessions3").unwrap();
let raw_store = transaction.object_store("inbound_group_sessions3").unwrap();
let key = store
.serializer
.encode_key(keys::INBOUND_GROUP_SESSIONS_V3, (session.room_id(), session.session_id()));
let idb_object: InboundGroupSessionIndexedDbObject =
serde_wasm_bindgen::from_value(raw_store.get(&key).unwrap().await.unwrap().unwrap())
.unwrap();

assert_eq!(idb_object.curve_key, None);
assert_eq!(idb_object.sender_data_type, None);
assert!(raw_store
.index_names()
.find(|idx| idx == "inbound_group_session_curve_key_sender_data_type_idx")
.is_some());

db.close();
}

fn create_sessions(room_id: &RoomId) -> (InboundGroupSession, InboundGroupSession) {
let curve_key = Curve25519PublicKey::from(&Curve25519SecretKey::new());
let ed_key = Ed25519SecretKey::new().public_key();
@@ -637,6 +730,75 @@ mod tests {
IdbDatabase::open_u32(name, 5)?.await
}

/// Opening a db that has been upgraded to MAX_SUPPORTED_SCHEMA_VERSION
/// should be ok
#[async_test]
async fn can_open_max_supported_schema_version() {
let _ = make_tracing_subscriber(None).try_init();

let db_prefix = "test_can_open_max_supported_schema_version";
// Create a database at MAX_SUPPORTED_SCHEMA_VERSION
create_future_schema_db(db_prefix, MAX_SUPPORTED_SCHEMA_VERSION).await;

// Now, try opening it again
IndexeddbCryptoStore::open_with_store_cipher(&db_prefix, None).await.unwrap();
}

/// Opening a db that has been upgraded beyond MAX_SUPPORTED_SCHEMA_VERSION
/// should throw an error
#[async_test]
async fn can_not_open_too_new_db() {
let _ = make_tracing_subscriber(None).try_init();

let db_prefix = "test_can_not_open_too_new_db";
// Create a database at MAX_SUPPORTED_SCHEMA_VERSION+1
create_future_schema_db(db_prefix, MAX_SUPPORTED_SCHEMA_VERSION + 1).await;

// Now, try opening it again
let result = IndexeddbCryptoStore::open_with_store_cipher(&db_prefix, None).await;
assert_matches!(
result,
Err(IndexeddbCryptoStoreError::SchemaTooNewError {
max_supported_version,
current_version
}) => {
assert_eq!(max_supported_version, MAX_SUPPORTED_SCHEMA_VERSION);
assert_eq!(current_version, MAX_SUPPORTED_SCHEMA_VERSION + 1);
}
);
}

// Create a database, and increase its schema version to the given version
// number.
async fn create_future_schema_db(db_prefix: &str, version: u32) {
let db_name = format!("{db_prefix}::matrix-sdk-crypto");

// delete the db in case it was used in a previous run
let _ = IdbDatabase::delete_by_name(&db_name);

// Open, and close, the store at the regular version.
IndexeddbCryptoStore::open_with_store_cipher(&db_prefix, None).await.unwrap();

// Now upgrade to the given version, keeping a record of the previous version so
// that we can double-check it.
let mut db_req: OpenDbRequest = IdbDatabase::open_u32(&db_name, version).unwrap();

let old_version: Rc<Cell<Option<u32>>> = Rc::new(Cell::new(None));
let old_version2 = old_version.clone();
db_req.set_on_upgrade_needed(Some(move |evt: &IdbVersionChangeEvent| {
old_version2.set(Some(evt.old_version() as u32));
Ok(())
}));

let db = db_req.await.unwrap();
assert_eq!(
old_version.get(),
Some(EXPECTED_SCHEMA_VERSION),
"Existing store had unexpected version number"
);
db.close();
}

/// Emulate the old behaviour of [`IndexeddbSerializer::serialize_value`].
///
/// We used to use an inefficient format for serializing objects in the
crates/matrix-sdk-indexeddb/src/crypto_store/migrations/v0_to_v5.rs
@@ -26,7 +26,7 @@ use crate::crypto_store::{

/// Perform schema migrations as needed, up to schema version 5.
pub(crate) async fn schema_add(name: &str) -> Result<(), DomException> {
do_schema_upgrade(name, 5, |db, old_version| {
do_schema_upgrade(name, 5, |db, _, old_version| {
// An old_version of 1 could either mean actually the first version of the
// schema, or a completely empty schema that has been created with a
// call to `IdbDatabase::open` with no explicit "version". So, to determine