Commit

Loki Release: update release notes and docs (#2808)
* update release notes and docs!

* add go and cortex version

* tweaks

* tweak wording

* change paths

* typo, thanks Cyril ;)

* thanks Owen ;)
slim-bean authored Oct 26, 2020
1 parent fa00b90 commit 6978ee5
Showing 12 changed files with 838 additions and 514 deletions.
210 changes: 210 additions & 0 deletions CHANGELOG.md

Large diffs are not rendered by default.

4 changes: 3 additions & 1 deletion docs/sources/alerting/_index.md
@@ -7,7 +7,7 @@ weight: 700

Loki includes a component called the Ruler, adapted from our upstream project, Cortex. The Ruler is responsible for continually evaluating a set of configurable queries and then alerting when certain conditions happen, e.g. a high percentage of error logs.

-First, ensure the Ruler component is enabled. The following is a basic configuration which loads rules from configuration files (it requires `/tmp/rules` and `/tmp/scratch` exist):
+First, ensure the Ruler component is enabled. The following is a basic configuration which loads rules from configuration files:

```yaml
ruler:
  # … (remaining ruler configuration truncated in this view)
```

@@ -168,6 +168,8 @@ Because the rule files are identical to Prometheus rule files, we can interact with the Ruler via `cortextool`.
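For reference, the ruler block elided above typically resembles the following sketch. The field names come from Loki's ruler configuration, but the paths, Alertmanager URL, and kvstore choice here are illustrative assumptions, not part of the truncated diff:

```yaml
ruler:
  storage:
    type: local
    local:
      directory: /tmp/rules      # rule files are loaded from here
  rule_path: /tmp/scratch        # scratch space for rule evaluation
  alertmanager_url: http://localhost:9093
  ring:
    kvstore:
      store: inmemory
  enable_api: true
```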

> **Note:** Not all commands in cortextool currently support Loki.

> **Note:** cortextool was intended to run against multi-tenant Loki, so its commands need an `--id=` flag set to the Loki instance ID (or the environment variable `CORTEX_TENANT_ID`). If Loki is running in single-tenant mode, the required ID is `fake` (yes, we know this might seem alarming, but it's totally fine; no, it can't be changed).
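Because the format is shared with Prometheus, a rule file for Loki looks like a standard Prometheus rule group with a LogQL metric query as the `expr`. The following is a sketch; the group name, stream selector, and threshold are illustrative assumptions:

```yaml
groups:
  - name: example
    rules:
      - alert: HighLogErrorRate
        # LogQL metric query: per-job rate of log lines containing "error"
        expr: sum by (job) (rate({app="my-app"} |= "error" [5m])) > 0.05
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "High error-log rate for {{ $labels.job }}"
```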

An example workflow is included below:

```sh
# … (example truncated in this view)
```
2 changes: 1 addition & 1 deletion docs/sources/configuration/examples.md
@@ -179,7 +179,7 @@ storage_config:

This is a configuration to deploy Loki depending only on a storage solution, e.g. an
S3-compatible API like MinIO. The ring configuration is based on the gossip memberlist
-and the index is shipped to storage via [boltdb-shipper](../../operations/storage/boltdb-shipper/).
+and the index is shipped to storage via [Single Store (boltdb-shipper)](../../operations/storage/boltdb-shipper/).

```yaml
auth_enabled: false
# … (remaining configuration truncated in this view)
```
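The elided example resembles a configuration along these lines. This is a sketch: the field names are taken from Loki's configuration reference, but the memberlist address, bucket, credentials, and dates are illustrative assumptions:

```yaml
auth_enabled: false
server:
  http_listen_port: 3100
memberlist:
  join_members:
    - loki-gossip-ring:7946   # DNS name resolving to all Loki instances
ingester:
  lifecycler:
    ring:
      kvstore:
        store: memberlist     # ring state gossiped via memberlist
      replication_factor: 1
schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: s3
      schema: v11
      index:
        prefix: index_
        period: 24h
storage_config:
  boltdb_shipper:
    active_index_directory: /loki/index
    shared_store: s3
    cache_location: /loki/boltdb-cache
  aws:
    s3: s3://access_key:secret@minio:9000/loki   # S3-compatible endpoint
    s3forcepathstyle: true
```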
1 change: 1 addition & 0 deletions docs/sources/getting-started/_index.md
@@ -4,6 +4,7 @@ weight: 300
---
# Getting started with Loki

+1. [Getting Logs Into Loki](get-logs-into-loki/)
1. [Grafana](grafana/)
1. [LogCLI](logcli/)
1. [Labels](labels/)
13 changes: 7 additions & 6 deletions docs/sources/getting-started/grafana.md
@@ -6,21 +6,22 @@ title: Loki in Grafana
Grafana ships with built-in support for Loki for versions greater than
[6.0](https://grafana.com/grafana/download/6.0.0). Using
[6.3](https://grafana.com/grafana/download/6.3.0) or later is highly
-recommended to take advantage of new LogQL functionality.
+recommended to take advantage of new [LogQL]({{< relref "../logql/_index.md" >}}) functionality.

1. Log into your Grafana instance. If this is your first time running
   Grafana, the username and password both default to `admin`.
-2. In Grafana, go to `Configuration` > `Data Sources` via the cog icon on the
+1. In Grafana, go to `Configuration` > `Data Sources` via the cog icon on the
left sidebar.
-3. Click the big <kbd>+ Add data source</kbd> button.
-4. Choose Loki from the list.
-5. The http URL field should be the address of your Loki server. For example,
+1. Click the big <kbd>+ Add data source</kbd> button.
+1. Choose Loki from the list.
+1. The http URL field should be the address of your Loki server. For example,
when running locally or with Docker using port mapping, the address is
likely `http://localhost:3100`. When running with docker-compose or
Kubernetes, the address is likely `http://loki:3100`.
-6. To see the logs, click <kbd>Explore</kbd> on the sidebar, select the Loki
+1. To see the logs, click <kbd>Explore</kbd> on the sidebar, select the Loki
datasource in the top-left dropdown, and then choose a log stream using the
<kbd>Log labels</kbd> button.
+1. Learn more about querying by reading about Loki's query language [LogQL]({{< relref "../logql/_index.md" >}}).
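The same data source can also be set up without clicking through the UI, via Grafana's data source provisioning mechanism. A minimal sketch, assuming Grafana's provisioning file format; the file path and URL are illustrative:

```yaml
# e.g. /etc/grafana/provisioning/datasources/loki.yaml (path is an assumption)
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://localhost:3100
```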

Read more about Grafana's Explore feature in the
[Grafana documentation](http://docs.grafana.org/features/explore) and on how to
2 changes: 1 addition & 1 deletion docs/sources/getting-started/logcli.md
@@ -3,7 +3,7 @@ title: LogCLI
---
# Querying Loki with LogCLI

-If you prefer a command line interface, LogCLI also allows users to run LogQL
+If you prefer a command line interface, LogCLI also allows users to run [LogQL]({{< relref "../logql/_index.md" >}})
queries against a Loki server.
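Under the hood, LogCLI issues requests against Loki's HTTP API. As a sketch (assuming the `/loki/api/v1/query_range` endpoint, which takes start/end as Unix epoch nanoseconds; the host and query here are illustrative), the request it makes can be built in a few lines of Python:

```python
from datetime import datetime, timezone
from urllib.parse import urlencode

def query_range_url(base, logql, start, end, limit=100):
    # Loki's query_range endpoint expects start/end as Unix epoch nanoseconds.
    params = {
        "query": logql,
        "start": int(start.timestamp()) * 1_000_000_000,
        "end": int(end.timestamp()) * 1_000_000_000,
        "limit": limit,
    }
    return f"{base}/loki/api/v1/query_range?{urlencode(params)}"

url = query_range_url(
    "http://localhost:3100",
    '{job="app"} |= "error"',
    datetime(2020, 10, 26, tzinfo=timezone.utc),
    datetime(2020, 10, 27, tzinfo=timezone.utc),
)
print(url)
```

Fetching this URL (e.g. with `curl`) returns the same log lines LogCLI would print.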

## Installation
4 changes: 3 additions & 1 deletion docs/sources/operations/storage/_index.md
@@ -3,6 +3,8 @@ title: Storage
---
# Loki Storage

+[High level storage overview here]({{< relref "../../storage/_index.md" >}})

Loki needs to store two different types of data: **chunks** and **indexes**.

Loki receives logs in separate streams, where each stream is uniquely identified
@@ -25,11 +27,11 @@ For more information:

The following are supported for the index:

+- [Single Store (boltdb-shipper) - Recommended for 2.0 and newer](boltdb-shipper/) index store which stores boltdb index files in the object store
- [Amazon DynamoDB](https://aws.amazon.com/dynamodb)
- [Google Bigtable](https://cloud.google.com/bigtable)
- [Apache Cassandra](https://cassandra.apache.org)
- [BoltDB](https://github.com/boltdb/bolt) (doesn't work when clustering Loki)
-- [BoltDB Shipper](boltdb-shipper/) EXPERIMENTAL index store which stores boltdb index files in the object store

The following are supported for the chunks:

6 changes: 2 additions & 4 deletions docs/sources/operations/storage/boltdb-shipper.md
@@ -1,9 +1,7 @@
---
-title: BoltDB Shipper
+title: Single Store (boltdb-shipper)
---
-# Loki with BoltDB Shipper
+# Single Store Loki (boltdb-shipper index type)

-:warning: BoltDB Shipper is still an experimental feature. It is not recommended to be used in production environments.

BoltDB Shipper lets you run Loki without any dependency on NoSQL stores for storing the index.
It stores the index locally in BoltDB files and keeps shipping those files to a shared object store, i.e. the same object store used for storing chunks.
Expand Down
100 changes: 0 additions & 100 deletions docs/sources/operations/storage/filesystem.md
@@ -42,103 +42,3 @@ The durability of the objects is at the mercy of the filesystem itself where oth…
### High Availability

Running Loki clustered is not possible with the filesystem store unless the filesystem is shared in some fashion (NFS for example). However, using shared filesystems is likely to be a bad experience with Loki, just as it is for almost every other application.

## New AND VERY EXPERIMENTAL in 1.5.0: Horizontal scaling of the filesystem store

**WARNING:** as the title suggests, this is very new and potentially buggy, and the configs around this feature are likely to change over time.

With that warning out of the way, the [boltdb-shipper](../boltdb-shipper/) index store makes it possible to overcome many of the limitations listed above: Loki can run with the filesystem store on separate machines yet still operate as a cluster, supporting replication and write distribution via the hash ring.

As mentioned in the title, this is very alpha at this point but we would love for people to try this and help us flush out bugs.

Here is an example config to run with Loki:

Use this config on multiple computers (or containers); do not run more than one instance on the same computer, as Loki uses the hostname as its ID in the ring.

Do not use a shared filesystem such as NFS for this; each machine should have its own filesystem.

```yaml
auth_enabled: false # single tenant mode
server:
http_listen_port: 3100
ingester:
max_transfer_retries: 0 # Disable blocks transfers on ingesters shutdown or rollout.
chunk_idle_period: 2h # Let chunks sit idle for at least 2h before flushing, this helps to reduce total chunks in store
max_chunk_age: 2h # Let chunks get at least 2h old before flushing due to age, this helps to reduce total chunks in store
chunk_target_size: 1048576 # Target chunks of 1MB, this helps to reduce total chunks in store
chunk_retain_period: 30s
query_store_max_look_back_period: -1 # This will allow the ingesters to query the store for all data
lifecycler:
heartbeat_period: 5s
interface_names:
- eth0
join_after: 30s
num_tokens: 512
ring:
heartbeat_timeout: 1m
kvstore:
consul:
consistent_reads: true
host: localhost:8500
http_client_timeout: 20s
store: consul
replication_factor: 1 # This can be increased and probably should if you are running multiple machines!
schema_config:
configs:
- from: 2018-04-15
store: boltdb-shipper
object_store: filesystem
schema: v11
index:
prefix: index_
period: 168h
storage_config:
boltdb_shipper:
shared_store: filesystem
active_index_directory: /tmp/loki/index
cache_location: /tmp/loki/boltdb-cache
filesystem:
directory: /tmp/loki/chunks
limits_config:
enforce_metric_name: false
reject_old_samples: true
reject_old_samples_max_age: 168h
chunk_store_config:
max_look_back_period: 0s # No limit how far we can look back in the store
table_manager:
retention_deletes_enabled: false
retention_period: 0s # No deletions, infinite retention
```

This setup requires Consul to be running for the ring (any of the ring stores will work: Consul, etcd, or memberlist; Consul is used in this example).

Consul must also be reachable from each machine. This example only specifies `host: localhost:8500`; you will likely need to change this to the correct hostname/IP and port of your Consul server.

**The config needs to be the same on every Loki instance!**

The important piece of this config is `query_store_max_look_back_period: -1`, which tells Loki to allow the ingesters to look in the store for all the data.

Traffic can be sent to any of the Loki servers; it can be round-robin load balanced if desired.

Each Loki instance will use Consul to properly route both read and write data to the correct Loki instance.

Scaling up is as easy as adding more Loki instances and letting them talk to the same ring.

Scaling down is harder but possible. You would need to shut down a Loki server, then take everything in:

```yaml
filesystem:
directory: /tmp/loki/chunks
```

and copy it to the same directory on another Loki server. There is currently no way to split the chunks between servers; you must move them all. We expect to provide more options here in the future.

