diff --git a/website/docs/docs/build/dimensions.md b/website/docs/docs/build/dimensions.md index d74bc773ea9..14548454332 100644 --- a/website/docs/docs/build/dimensions.md +++ b/website/docs/docs/build/dimensions.md @@ -6,20 +6,18 @@ sidebar_label: "Dimensions" tags: [Metrics, Semantic Layer] --- -Dimensions are a way to group or filter information based on categories or time. It's like a special label that helps organize and analyze data. - -In a data platform, dimensions are part of a larger structure called a semantic model. It's created along with other elements like [entities](/docs/build/entities) and [measures](/docs/build/measures) and used to add more details to your data that can't be easily added up or combined. In SQL, dimensions are typically included in the `group by` clause of your SQL query. +Dimensions represent the non-aggregatable columns in your data set, which are the attributes, features, or characteristics that describe or categorize data. In the context of the dbt Semantic Layer, dimensions are part of a larger structure called a semantic model. They are created along with other elements like [entities](/docs/build/entities) and [measures](/docs/build/measures) and used to add more details to your data. In SQL, dimensions are typically included in the `group by` clause of your SQL query. -All dimensions require a `name`, `type` and in some cases, an `expr` parameter. The `name` for your dimension must be unique to the semantic model and can not be the same as an existing `entity` or `measure` within that same model. +All dimensions require a `name`, `type`, and can optionally include an `expr` parameter. The `name` for your dimension must be unique within the same semantic model. | Parameter | Description | Type | | --------- | ----------- | ---- | | `name` | Refers to the name of the group that will be visible to the user in downstream tools. 
It can also serve as an alias if the column name or SQL query reference is different and provided in the `expr` parameter.

Dimension names should be unique within a semantic model, but they can be non-unique across different models as MetricFlow uses [joins](/docs/build/join-logic) to identify the right dimension. | Required | -| `type` | Specifies the type of group created in the semantic model. There are two types:

- **Categorical**: Group rows in a table by categories like geography, color, and so on.
- **Time**: Point to a date field in the data platform. Must be of type TIMESTAMP or equivalent in the data platform engine.
- You can also use time dimensions to specify time spans for [slowly changing dimensions](/docs/build/dimensions#scd-type-ii) tables. | Required | +| `type` | Specifies the type of group created in the semantic model. There are two types:

- **Categorical**: Describe attributes or features like geography or sales region.
- **Time**: Time-based dimensions like timestamps or dates. | Required | | `type_params` | Specific type params such as if the time is primary or used as a partition | Required | | `description` | A clear description of the dimension | Optional | | `expr` | Defines the underlying column or SQL query for a dimension. If no `expr` is specified, MetricFlow will use the column with the same name as the group. You can use the column name itself to input a SQL expression. | Optional | @@ -48,6 +46,8 @@ semantic_models: agg_time_dimension: order_date # --- entities --- entities: + - name: transaction + type: primary ... # --- measures --- measures: @@ -56,14 +56,20 @@ semantic_models: dimensions: - name: order_date type: time - label: "Date of transaction" # Recommend adding a label to define the value displayed in downstream tools - expr: date_trunc('day', ts) - - name: is_bulk_transaction + type_params: + time_granularity: day + label: "Date of transaction" # Recommend adding a label to provide more context to users consuming the data + expr: ts + - name: is_bulk type: categorical expr: case when quantity > 10 then true else false end + - name: type + type: categorical ``` -MetricFlow requires that all dimensions have a primary entity. This is to guarantee unique dimension names. If your data source doesn't have a primary entity, you need to assign the entity a name using the `primary_entity: entity_name` key. It doesn't necessarily have to map to a column in that table and assigning the name doesn't affect query generation. +Dimensions are bound to the primary entity of the semantic model they are defined in. For example, the dimension `type` is defined in a model that has `transaction` as its primary entity. `type` is scoped to the `transaction` entity, and to reference this dimension you would use the fully qualified dimension name, for example, `transaction__type`. + +MetricFlow requires that all semantic models have a primary entity. This is to guarantee unique dimension names. 
If your data source doesn't have a primary entity, you need to assign the entity a name using the `primary_entity` key. It doesn't necessarily have to map to a column in that table and assigning the name doesn't affect query generation. We recommend making these "virtual primary entities" unique across your semantic model. An example of defining a primary entity for a data source that doesn't have a primary entity column is below: ```yaml semantic_model: @@ -93,7 +99,7 @@ This section further explains the dimension definitions, along with examples. Di ## Categorical -Categorical is used to group metrics by different categories such as product type, color, or geographical area. They can refer to existing columns in your dbt model or be calculated using a SQL expression with the `expr` parameter. An example of a category dimension is `is_bulk_transaction`, which is a group created by applying a case statement to the underlying column `quantity`. This allows users to group or filter the data based on bulk transactions. +Categorical dimensions are used to group metrics by different attributes, features, or characteristics such as product type. They can refer to existing columns in your dbt model or be calculated using a SQL expression with the `expr` parameter. An example of a categorical dimension is `is_bulk_transaction`, which is a group created by applying a case statement to the underlying column `quantity`. This allows users to group or filter the data based on bulk transactions. ```yaml dimensions: @@ -104,15 +110,10 @@ dimensions: ## Time -:::tip use datetime data type if using BigQuery -To use BigQuery as your data platform, time dimensions columns need to be in the datetime data type. If they are stored in another type, you can cast them to datetime using the `expr` property. Time dimensions are used to group metrics by different levels of time, such as day, week, month, quarter, and year. 
MetricFlow supports these granularities, which can be specified using the `time_granularity` parameter. -::: - -Time has additional parameters specified under the `type_params` section. When you query one or more metrics in MetricFlow using the CLI, the default time dimension for a single metric is the aggregation time dimension, which you can refer to as `metric_time` or use the dimensions' name. +Time has additional parameters specified under the `type_params` section. When you query one or more metrics, the default time dimension for each metric is the aggregation time dimension, which you can refer to as `metric_time` or use the dimension's name. You can use multiple time groups in separate metrics. For example, the `users_created` metric uses `created_at`, and the `users_deleted` metric uses `deleted_at`: - ```bash # dbt Cloud users dbt sl query --metrics users_created,users_deleted --group-by metric_time__year --order-by metric_time__year @@ -121,8 +122,7 @@ dbt sl query --metrics users_created,users_deleted --group-by metric_time__year mf query --metrics users_created,users_deleted --group-by metric_time__year --order-by metric_time__year ``` - -You can set `is_partition` for time or categorical dimensions to define specific time spans. Additionally, use the `type_params` section to set `time_granularity` to adjust aggregation detail (like daily, weekly, and so on): +You can set `is_partition` for time dimensions to define specific time spans. Additionally, use the `type_params` section to set `time_granularity` to adjust aggregation details (hourly, daily, weekly, and so on). @@ -130,31 +130,19 @@ You can set `is_partition` for time or categorical dimensions to define specific Use `is_partition: True` to show that a dimension exists over a specific time window. For example, a date-partitioned dimensional table. When you query metrics from different tables, the dbt Semantic Layer uses this parameter to ensure that the correct dimensional values are joined to measures. 
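To make the partition-join behavior concrete, here is an illustrative Python sketch (the table, column, and function names are hypothetical, and this is a hand-rolled simulation rather than MetricFlow's generated SQL): because the dimension table is date-partitioned, the join matches on both the entity key and the partition column, so each measure row picks up the dimension values that were valid on its date.

```python
# Illustrative simulation of a partition-aware join (not MetricFlow's actual code).
# Fact rows carry a metric_time; the dimension table is partitioned by date_day.

measures = [  # fact rows: (user_id, metric_time, revenue)
    {"user_id": 1, "metric_time": "2024-01-01", "revenue": 10},
    {"user_id": 1, "metric_time": "2024-01-02", "revenue": 20},
]

user_dims = [  # date-partitioned dimension rows: (user_id, date_day, plan)
    {"user_id": 1, "date_day": "2024-01-01", "plan": "free"},
    {"user_id": 1, "date_day": "2024-01-02", "plan": "pro"},
]

def join_on_partition(facts, dims):
    """Join each fact row to the dimension row for the same entity AND partition date."""
    out = []
    for f in facts:
        for d in dims:
            if d["user_id"] == f["user_id"] and d["date_day"] == f["metric_time"]:
                out.append({**f, "plan": d["plan"]})
    return out

rows = join_on_partition(measures, user_dims)
print(rows)
```

Without the partition match, each fact row would join to every historical dimension row for the same user, fanning out and double-counting revenue.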
-You can also use `is_partition` for [categorical](#categorical) dimensions as well. - -MetricFlow enables metric aggregation during query time. For example, you can aggregate the `messages_per_month` measure. If you originally had a `time_granularity` for the time dimensions `metric_time`, you can specify a yearly granularity for aggregation in your query: - -```bash -# dbt Cloud users -dbt sl query --metrics messages_per_month --group-by metric_time__year --order-by metric_time__year - -# dbt Core users -mf query --metrics messages_per_month --group-by metric_time__year --order metric_time__year -``` - ```yaml dimensions: - name: created_at type: time label: "Date of creation" - expr: date_trunc('day', ts_created) # ts_created is the underlying column name from the table - is_partition: True + expr: ts_created # ts_created is the underlying column name from the table + is_partition: True type_params: time_granularity: day - name: deleted_at type: time label: "Date of deletion" - expr: date_trunc('day', ts_deleted) # ts_deleted is the underlying column name from the table + expr: ts_deleted # ts_deleted is the underlying column name from the table is_partition: True type_params: time_granularity: day @@ -173,28 +161,34 @@ measures: -`time_granularity` specifies the smallest level of detail that a measure or metric should be reported at, such as daily, weekly, monthly, quarterly, or yearly. Different granularity options are available, and each metric must have a specified granularity. For example, a metric specified with weekly granularity couldn't be aggregated to a daily grain. +`time_granularity` specifies the grain of a time dimension. MetricFlow will transform the underlying column to the specified granularity. For example, if you add hourly granularity to a time dimension column, MetricFlow will run a `date_trunc` function to convert the timestamp to hourly. 
You can easily change the time grain at query time and aggregate it to a coarser grain, for example, from hourly to monthly. However, you can't go from a coarser grain to a finer grain (monthly to hourly). -The current options for time granularity are day, week, month, quarter, and year. +Our supported granularities are: +* nanosecond (Snowflake only) +* microsecond +* millisecond +* second +* minute +* hour +* day +* week +* month +* quarter +* year -Aggregation between metrics with different granularities is possible, with the Semantic Layer returning results at the highest granularity by default. For example, when querying two metrics with daily and monthly granularity, the resulting aggregation will be at the monthly level. +Aggregation between metrics with different granularities is possible, with the Semantic Layer returning results at the coarsest granularity by default. For example, when querying two metrics with daily and monthly granularity, the resulting aggregation will be at the monthly level. ```yaml dimensions: - name: created_at type: time label: "Date of creation" - expr: date_trunc('day', ts_created) # ts_created is the underlying column name from the table + expr: ts_created # ts_created is the underlying column name from the table is_partition: True type_params: - time_granularity: day + time_granularity: hour - name: deleted_at type: time label: "Date of deletion" - expr: date_trunc('day', ts_deleted) # ts_deleted is the underlying column name from the table + expr: ts_deleted # ts_deleted is the underlying column name from the table is_partition: True type_params: - time_granularity: day + time_granularity: day measures: - name: users_deleted @@ -213,7 +207,7 @@ measures: ### SCD Type II :::caution -Currently, there are limitations in supporting SCDs. +Currently, semantic models with SCD Type II dimensions cannot contain measures. ::: MetricFlow supports joins against dimensions values in a semantic model built on top of a slowly changing dimension (SCD) Type II table. 
This is useful when you need a particular metric sliced by a group that changes over time, such as the historical trends of sales by a customer's country. diff --git a/website/docs/docs/build/metricflow-time-spine.md b/website/docs/docs/build/metricflow-time-spine.md index 997d85e38a8..ff3ac0eafb6 100644 --- a/website/docs/docs/build/metricflow-time-spine.md +++ b/website/docs/docs/build/metricflow-time-spine.md @@ -6,11 +6,45 @@ sidebar_label: "MetricFlow time spine" tags: [Metrics, Semantic Layer] --- -MetricFlow uses a timespine table to construct cumulative metrics. By default, MetricFlow expects the timespine table to be named `metricflow_time_spine` and doesn't support using a different name. +It's common in analytics engineering to have a date dimension or "time spine" table as a base table for different types of time-based joins and aggregations. The structure of this table is typically a base column of daily or hourly dates, with additional columns for other time grains, like fiscal quarter, defined based on the base column. You can join other tables to the time spine on the base column to calculate metrics like revenue at a point in time, or to aggregate to a specific time grain. -To create this table, you need to create a model in your dbt project called `metricflow_time_spine` and add the following code: +MetricFlow requires you to define a time spine table as a project-level configuration, which is then used for various time-based joins and aggregations, like cumulative metrics. At a minimum, you need to define a time spine table for a daily grain. You can optionally define a time spine table for a different granularity, like hourly. - +If you already have a date dimension or time spine table in your dbt project, you can point MetricFlow to this table by updating the `model` configuration to use this table in the Semantic Layer. 
For example, given the following directory structure, you can create two time spine configurations, `time_spine_hourly` and `time_spine_daily`. + +:::tip +Previously, you were required to create a model called `metricflow_time_spine` in your dbt project. This is no longer required. However, you can build your time spine model from this table if you don't have another date dimension table you want to use in your project. +::: + + + + +```yaml +models: + - name: time_spine_hourly + time_spine: + standard_granularity_column: date_hour # column for the standard grain of your table + columns: + - name: date_hour + granularity: hour # set granularity at column-level for standard_granularity_column + - name: time_spine_daily + time_spine: + standard_granularity_column: date_day # column for the standard grain of your table + columns: + - name: date_day + granularity: day # set granularity at column-level for standard_granularity_column +``` + +Now, break down the configuration above. It's pointing to a model called `time_spine_hourly`. It sets the time spine configurations under the `time_spine` key. The `standard_granularity_column` is the lowest grain of the table, in this case, it's hourly. It needs to reference a column defined under the `columns` key, in this case, `date_hour`. Use the `standard_granularity_column` as the join key for the time spine table when joining tables in MetricFlow. Here, the granularity of the `standard_granularity_column` is set at the column level, in this case, `hour`. + + +If you need to create a time spine table from scratch, you can do so by adding the following code to your dbt project. +The example creates a time spine at a daily grain and an hourly grain. A few things to note when creating time spine models: +* MetricFlow will use the time spine with the coarsest compatible granularity for a given query to ensure the most efficient query possible. 
For example, if you have a time spine at a monthly grain, and query a dimension at a monthly grain, MetricFlow will use the monthly time spine. If you only have a daily time spine, MetricFlow will use the daily time spine and `date_trunc` it to month. +* You can add a time spine for each granularity you intend to use if query efficiency is more important to you than configuration time or storage constraints. For most engines, the query performance difference should be minimal and transforming your time spine to a coarser grain at query time shouldn't add significant overhead to your queries. +* We recommend having a time spine at the finest grain used in any of your dimensions to avoid unexpected errors. For example, if you have dimensions at an hourly grain, you should have a time spine at an hourly grain. + + @@ -27,7 +61,7 @@ with days as ( dbt_utils.date_spine( 'day', "to_date('01/01/2000','mm/dd/yyyy')", - "to_date('01/01/2027','mm/dd/yyyy')" + "to_date('01/01/2025','mm/dd/yyyy')" ) }} @@ -39,6 +73,9 @@ final as ( ) select * from final +-- filter the time spine to a specific range +where date_day > dateadd(year, -4, current_timestamp()) +and date_day < dateadd(day, 30, current_timestamp()) ``` @@ -58,7 +95,7 @@ with days as ( dbt.date_spine( 'day', "to_date('01/01/2000','mm/dd/yyyy')", - "to_date('01/01/2027','mm/dd/yyyy')" + "to_date('01/01/2025','mm/dd/yyyy')" ) }} @@ -70,6 +107,8 @@ final as ( ) select * from final +where date_day > dateadd(year, -4, current_timestamp()) +and date_day < dateadd(day, 30, current_timestamp()) ``` @@ -86,7 +125,7 @@ with days as ( {{dbt_utils.date_spine( 'day', "DATE(2000,01,01)", - "DATE(2030,01,01)" + "DATE(2025,01,01)" ) }} ), @@ -98,6 +137,9 @@ final as ( select * from final +-- filter the time spine to a specific range +where date_day > dateadd(year, -4, current_timestamp()) +and date_day < dateadd(day, 30, current_timestamp()) ``` @@ -112,7 +154,7 @@ with days as ( {{dbt.date_spine( 'day', "DATE(2000,01,01)", - 
"DATE(2030,01,01)" + "DATE(2025,01,01)" ) }} ), @@ -124,8 +166,44 @@ final as ( select * from final +-- filter the time spine to a specific range +where date_day > dateadd(year, -4, current_timestamp()) +and date_day < dateadd(day, 30, current_timestamp()) ``` -You only need to include the `date_day` column in the table. MetricFlow can handle broader levels of detail, but it doesn't currently support finer grains. +## Hourly time spine + + +```sql +-- filename: metricflow_time_spine_hour.sql +{{ + config( + materialized = 'table', + ) +}} + +with hours as ( + + {{ + dbt.date_spine( + 'hour', + "to_date('01/01/2000','mm/dd/yyyy')", + "to_date('01/01/2025','mm/dd/yyyy')" + ) + }} + +), + +final as ( + select cast(date_hour as timestamp) as date_hour + from hours +) + +select * from final +-- filter the time spine to a specific range +where date_hour > dateadd(year, -4, current_timestamp()) +and date_hour < dateadd(day, 30, current_timestamp()) +``` + diff --git a/website/docs/docs/build/metrics-overview.md b/website/docs/docs/build/metrics-overview.md index a96c22be883..586402b6847 100644 --- a/website/docs/docs/build/metrics-overview.md +++ b/website/docs/docs/build/metrics-overview.md @@ -9,7 +9,7 @@ pagination_next: "docs/build/cumulative" Once you've created your semantic models, it's time to start adding metrics. Metrics can be defined in the same YAML files as your semantic models, or split into separate YAML files into any other subdirectories (provided that these subdirectories are also within the same dbt project repo). -The keys for metrics definitions are: +This article explains the different supported metric types you can add to your dbt project. 
The keys for metrics definitions are: @@ -27,6 +27,8 @@ The keys for metrics definitions are: Here's a complete example of the metrics spec configuration: + + ```yaml metrics: - name: metric name ## Required @@ -42,6 +44,8 @@ metrics: {{ Dimension('entity__name') }} > 0 and {{ Dimension(' entity__another_name') }} is not null and {{ Metric('metric_name', group_by=['entity_name']) }} > 5 ``` + + @@ -61,6 +65,8 @@ metrics: Here's a complete example of the metrics spec configuration: + + ```yaml metrics: - name: metric name ## Required @@ -76,19 +82,48 @@ metrics: {{ Dimension('entity__name') }} > 0 and {{ Dimension(' entity__another_name') }} is not null and {{ Metric('metric_name', group_by=['entity_name']) }} > 5 ``` + -This page explains the different supported metric types you can add to your dbt project. - import SLCourses from '/snippets/_sl-course.md'; -### Conversion metrics +## Default granularity for metrics + +It's possible to define a default time granularity for metrics if it's different from the granularity of the default aggregation time dimension (`metric_time`). This is useful if your time dimension has a very fine grain, like second or hour, but you typically query metrics rolled up at a coarser grain. The granularity can be set using the `time_granularity` parameter on the metric, and defaults to `day`. If `day` is not available because the dimension is defined at a coarser granularity, it will default to the defined granularity for the dimension. + +### Example +You have a semantic model called `orders` with a time dimension called `order_time`. You want the `orders` metric to roll up to `monthly` by default; however, you want the option to look at these metrics hourly. You can set the `time_granularity` parameter on the `order_time` dimension to `hour`, and then set the `time_granularity` parameter in the metric to `month`. +```yaml +semantic_models: + ... 
+ dimensions: + - name: order_time + type: time + type_params: + time_granularity: hour + measures: + - name: orders + expr: 1 + agg: sum + metrics: + - name: orders + type: simple + label: Count of Orders + type_params: + measure: + name: orders + time_granularity: month # Optional, defaults to day +``` + +## Conversion metrics [Conversion metrics](/docs/build/conversion) help you track when a base event and a subsequent conversion event occur for an entity within a set time period. + + ```yaml metrics: - name: The metric name @@ -112,11 +147,14 @@ metrics: - base_property: DIMENSION or ENTITY conversion_property: DIMENSION or ENTITY ``` + -### Cumulative metrics +## Cumulative metrics [Cumulative metrics](/docs/build/cumulative) aggregate a measure over a given window. If no window is specified, the window will accumulate the measure over all of the recorded time period. Note that you will need to create the [time spine model](/docs/build/metricflow-time-spine) before you add cumulative metrics. + + ```yaml # Cumulative metrics aggregate a measure over a given window. The window is considered infinite if no window parameter is passed (accumulate the measure over all of time) metrics: @@ -130,11 +168,14 @@ metrics: join_to_timespine: true window: 7 days ``` + -### Derived metrics +## Derived metrics [Derived metrics](/docs/build/derived) are defined as an expression of other metrics. Derived metrics allow you to do calculations on top of metrics. + + ```yaml metrics: - name: order_gross_profit @@ -149,6 +190,8 @@ metrics: - name: order_cost alias: cost ``` + + -### Ratio metrics +## Ratio metrics [Ratio metrics](/docs/build/ratio) involve a numerator metric and a denominator metric. A `filter` string can be applied to both the numerator and denominator or separately to the numerator or denominator. 
+ + ```yaml metrics: - name: cancellation_rate @@ -191,8 +236,9 @@ metrics: filter: | {{ Dimension('customer__country') }} = 'MX' ``` + -### Simple metrics +## Simple metrics [Simple metrics](/docs/build/simple) point directly to a measure. You may think of it as a function that takes only one measure as the input. @@ -200,6 +246,8 @@ metrics: **Note:** If you've already defined the measure using the `create_metric: True` parameter, you don't need to create simple metrics. However, if you would like to include a constraint on top of the measure, you will need to create a simple type metric. + + ```yaml metrics: - name: cancellations @@ -214,6 +262,7 @@ metrics: {{ Dimension('order__value')}} > 100 and {{Dimension('user__acquisition')}} is not null join_to_timespine: true ``` + ## Filters @@ -221,6 +270,8 @@ A filter is configured using Jinja templating. Use the following syntax to refer Refer to [Metrics as dimensions](/docs/build/ref-metrics-in-filters) for details on how to use metrics as dimensions with metric filters: + + ```yaml filter: | {{ Entity('entity_name') }} @@ -232,10 +283,20 @@ filter: | {{ TimeDimension('time_dimension', 'granularity') }} filter: | - {{ Metric('metric_name', group_by=['entity_name']) }} # Available in v1.8 or with [versionless (/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless) dbt Cloud] + {{ Metric('metric_name', group_by=['entity_name']) }} # Available in v1.8 or with [versionless](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless) dbt Cloud. +``` + + + +For example, if you want to filter for the order date dimension grouped by month, use the following syntax: + +```yaml +filter: | + {{ TimeDimension('order_date', 'month') }} + ``` -### Further configuration +## Further configuration You can set more metadata for your metrics, which can be used by other tools later on. 
The way this metadata is used will vary based on the specific integration partner diff --git a/website/docs/docs/build/semantic-models.md b/website/docs/docs/build/semantic-models.md index e136b2a064d..d683d7cd020 100644 --- a/website/docs/docs/build/semantic-models.md +++ b/website/docs/docs/build/semantic-models.md @@ -227,20 +227,20 @@ You can refer to entities (join keys) in a semantic model using the `name` param ### Dimensions -[Dimensions](/docs/build/dimensions) are different ways to organize or look at data. For example, you might group data by things like region, country, or what job someone has. However, trying to set up a system that covers every possible way to group data can be time-consuming and prone to errors. +[Dimensions](/docs/build/dimensions) are different ways to organize or look at data. They are effectively the group by parameters for metrics. For example, you might group data by things like region, country, or job title. -Instead of trying to figure out all the possible groupings ahead of time, MetricFlow lets you ask for the data you need and sorts out how to group it dynamically. You tell it what groupings (dimensions parameters) you're interested in by giving it a `name` (either a column or SQL expression like "country" or "user role") and the `type` of grouping it is (`categorical` or `time`). Categorical groups are for things you can't measure in numbers, while time groups represent dates. +MetricFlow takes a dynamic approach when making dimensions available for metrics. Instead of trying to figure out all the possible groupings ahead of time, MetricFlow lets you ask for the dimensions you need and constructs any joins necessary to reach the requested dimensions at query time. The advantage of this approach is that you don't need to set up a system that pre-materializes every possible way to group data, which can be time-consuming and prone to errors. 
Instead, you define the dimensions (group by parameters) you're interested in within the semantic model, and they will automatically be made available for valid metrics. -- Dimensions are identified using the name parameter, just like identifiers. -- The naming of groups must be unique within a semantic model, but not across semantic models since MetricFlow, uses entities to determine the appropriate groups. -- MetricFlow requires all dimensions to be tied to a primary entity. +Dimensions have the following characteristics: + +- There are two types of dimensions: categorical and time. Categorical dimensions are for things you can't measure in numbers, while time dimensions represent dates and timestamps. +- Dimensions are bound to the primary entity of the semantic model in which they are defined. For example, if a dimension called `full_name` is defined in a model with `user` as a primary entity, then `full_name` is scoped to the `user` entity. To reference this dimension, you would use the fully qualified dimension name `user__full_name`. +- The naming of dimensions must be unique in each semantic model with the same primary entity. Dimension names can be repeated if defined in semantic models with a different primary entity. -While there's technically no limit to the number of dimensions in a semantic model, it's important to ensure the model remains effective and efficient for its intended purpose. :::info For time groups For semantic models with a measure, you must have a [primary time group](/docs/build/dimensions#time). 
- ::: ### Measures diff --git a/website/docs/docs/build/simple.md b/website/docs/docs/build/simple.md index a5294c5eeb8..f57d498d290 100644 --- a/website/docs/docs/build/simple.md +++ b/website/docs/docs/build/simple.md @@ -11,7 +11,7 @@ Simple metrics are metrics that directly reference a single measure, without any The parameters, description, and type for simple metrics are: - :::tip +:::tip Note that we use the double colon (::) to indicate whether a parameter is nested within another parameter. So for example, `query_params::metrics` means the `metrics` parameter is nested under `query_params`. ::: diff --git a/website/docs/docs/dbt-versions/release-notes.md b/website/docs/docs/dbt-versions/release-notes.md index 4158a249560..a9db34334ad 100644 --- a/website/docs/docs/dbt-versions/release-notes.md +++ b/website/docs/docs/dbt-versions/release-notes.md @@ -18,6 +18,9 @@ Release notes are grouped by month for both multi-tenant and virtual private clo \* The official release date for this new format of release notes is May 15th, 2024. Historical release notes for prior dates may not reflect all available features released earlier this year or their tenancy availability. +## August 2024 +- **New**: You can now configure metrics at finer time granularities, such as hour, minute, or second. This is particularly useful for more detailed analysis and for datasets where high-resolution time data is required, such as minute-by-minute event tracking. Refer to [dimensions](/docs/build/dimensions) for more information about time granularity. + ## July 2024 - **New:** [Connections](/docs/cloud/connect-data-platform/about-connections#connection-management) are now available under **Account settings** as a global setting. Previously, they were found under **Project settings**. This is being rolled out in phases over the coming weeks. 
- **New:** Admins can now assign [environment-level permissions](/docs/cloud/manage-access/environment-permissions) to groups for specific roles. diff --git a/website/static/img/time_spines.png b/website/static/img/time_spines.png new file mode 100644 index 00000000000..ef7477c3a01 Binary files /dev/null and b/website/static/img/time_spines.png differ