Merge branch 'main' into fix-compactor-dashboard
QuentinBisson committed Apr 16, 2024
2 parents 7874b6f + eaa06f8 commit 68a8bb9
Showing 16 changed files with 166 additions and 31 deletions.
1 change: 0 additions & 1 deletion .github/workflows/publish-technical-documentation-next.yml
@@ -10,7 +10,6 @@ on:
jobs:
sync:
runs-on: "ubuntu-latest"
needs: "test"
steps:
- name: "Check out code"
uses: "actions/checkout@v4"
@@ -12,7 +12,6 @@ on:
jobs:
sync:
runs-on: "ubuntu-latest"
needs: "test"
steps:
- name: "Checkout code and tags"
uses: "actions/checkout@v4"
2 changes: 1 addition & 1 deletion docs/sources/operations/scalability.md
@@ -17,7 +17,7 @@ and scaling for resource usage.
The Query frontend has an in-memory queue that can be moved out into a separate process similar to the
[Grafana Mimir query-scheduler](/docs/mimir/latest/operators-guide/architecture/components/query-scheduler/). This allows running multiple query frontends.

To run with the Query Scheduler, the frontend needs to be passed the scheduler's address via `-frontend.scheduler-address` and the querier processes need to be started with `-querier.scheduler-address` set to the same address. Both options can also be defined via the [configuration file]({{< relref "../configure/_index.md" >}}).
To run with the Query Scheduler, the frontend needs to be passed the scheduler's address via `-frontend.scheduler-address` and the querier processes need to be started with `-querier.scheduler-address` set to the same address. Both options can also be defined via the [configuration file](https://grafana.com/docs/loki/<LOKI_VERSION>/configure).

It is not valid to start the querier with both a configured frontend and a scheduler address.
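As an illustrative sketch only (not Loki's actual startup code, and using a hypothetical helper name), the mutual-exclusivity rule above can be expressed with Go's standard `flag` package:

```go
package main

import (
	"errors"
	"flag"
	"fmt"
)

// validateQuerierFlags mimics the rule described above: a querier may be
// given a frontend address or a scheduler address, but not both.
// Hypothetical sketch; not Loki's real implementation.
func validateQuerierFlags(args []string) (string, error) {
	fs := flag.NewFlagSet("querier", flag.ContinueOnError)
	frontendAddr := fs.String("querier.frontend-address", "", "frontend address")
	schedulerAddr := fs.String("querier.scheduler-address", "", "query-scheduler address")
	if err := fs.Parse(args); err != nil {
		return "", err
	}
	if *frontendAddr != "" && *schedulerAddr != "" {
		return "", errors.New("configure either a frontend address or a scheduler address, not both")
	}
	if *schedulerAddr != "" {
		return "scheduler at " + *schedulerAddr, nil
	}
	return "frontend at " + *frontendAddr, nil
}

func main() {
	target, err := validateQuerierFlags([]string{"-querier.scheduler-address", "scheduler:9095"})
	if err != nil {
		panic(err)
	}
	fmt.Println("querier will connect to", target) // querier will connect to scheduler at scheduler:9095
}
```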

2 changes: 1 addition & 1 deletion docs/sources/reference/loki-http-api.md
@@ -1040,7 +1040,7 @@ GET /config
```

`/config` exposes the current configuration. The optional `mode` query parameter can be used to
modify the output. If it has the value `diff` only the differences between the default configuration
modify the output. If it has the value `diffs` only the differences between the default configuration
and the current are returned. A value of `defaults` returns the default configuration.

In microservices mode, the `/config` endpoint is exposed by all components.
4 changes: 2 additions & 2 deletions docs/sources/release-notes/v2-6.md
@@ -13,7 +13,7 @@ Grafana Labs is excited to announce the release of Loki 2.6. Here's a summary of
- **Query multiple tenants at once.** We've introduced cross-tenant query federation, which allows you to issue one query to multiple tenants and get a single, consolidated result. This is great for scenarios where you need a global view of logs within your multi-tenant cluster. For more information on how to enable this feature, see [Multi-Tenancy]({{< relref "../operations/multi-tenancy.md" >}}).
- **Filter out and delete certain log lines from query results.** This is particularly useful in cases where users may accidentally write sensitive information to Loki that they do not want exposed. Users craft a LogQL query that selects the specific lines they're interested in, and then can choose to either filter out those lines from query results, or permanently delete them from Loki's storage. For more information, see [Logs Deletion]({{< relref "../operations/storage/logs-deletion.md" >}}).
- **Improved query performance on instant queries.** Loki now splits instant queries with a large time range (for example, `sum(rate({app="foo"}[6h]))`) into several smaller sub-queries and executes them in parallel. Users don't need to take any action to enjoy this performance improvement; however, they can adjust the number of sub-queries generated by modifying the `split_queries_by_interval` configuration parameter, which currently defaults to `30m`.
- **Support Baidu AI Cloud as a storage backend.** Loki users can now use Baidu Object Storage (BOS) as their storage backend. See [bos_storage_config]({{< relref "../configure/_index.md#bos_storage_config" >}}) for details.
- **Support Baidu AI Cloud as a storage backend.** Loki users can now use Baidu Object Storage (BOS) as their storage backend. See [bos_storage_config](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/) for details.

For a full list of all changes, look at the [CHANGELOG](https://github.com/grafana/loki/blob/main/CHANGELOG.md).

@@ -40,4 +40,4 @@ A summary of some of the more important fixes:
- [PR 6152](https://github.com/grafana/loki/pull/6152) Fixed a scenario where live tailing of logs could cause unbounded ingester memory growth.
- [PR 5685](https://github.com/grafana/loki/pull/5685) Fixed a bug in Loki's push request parser that allowed users to send arbitrary non-string data as a log line. We now test that the pushed values are valid strings and return an error if values are not valid strings.
- [PR 5799](https://github.com/grafana/loki/pull/5799) Fixed incorrect deduplication logic for cases where multiple log entries with the same timestamp exist.
- [PR 5888](https://github.com/grafana/loki/pull/5888) Fixed a bug in the [common configuration]({{< relref "../configure/_index.md#common" >}}) where the `instance_interface_names` setting was getting overwritten by the default ring configuration.
- [PR 5888](https://github.com/grafana/loki/pull/5888) Fixed a bug in the [common configuration](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#common) where the `instance_interface_names` setting was getting overwritten by the default ring configuration.
1 change: 0 additions & 1 deletion docs/sources/setup/_index.md
@@ -7,7 +7,6 @@ weight: 300

# Setup Loki

- Estimate the initial [size]({{< relref "./size" >}}) for your Loki cluster.
- [Install]({{< relref "./install" >}}) Loki.
- [Migrate]({{< relref "./migrate" >}}) from one Loki implementation to another.
- [Upgrade]({{< relref "./upgrade" >}}) from one Loki version to a newer version.
4 changes: 0 additions & 4 deletions docs/sources/setup/install/_index.md
@@ -17,10 +17,6 @@ There are several methods of installing Loki and Promtail:
- [Install and run locally]({{< relref "./local" >}})
- [Install from source]({{< relref "./install-from-source" >}})

The [Sizing Tool]({{< relref "../size" >}}) can be used to determine the proper cluster sizing
given an expected ingestion rate and query performance. It targets the Helm
installation on Kubernetes.

## General process

In order to run Loki, you must:
27 changes: 26 additions & 1 deletion docs/sources/setup/install/helm/install-monolithic/_index.md
@@ -12,7 +12,7 @@ weight: 100

This Helm Chart installation runs the Grafana Loki *single binary* within a Kubernetes cluster.

If you set the `singleBinary.replicas` value to 1, this chart configures Loki to run the `all` target in a [monolithic mode]({{< relref "../../../../get-started/deployment-modes#monolithic-mode" >}}), designed to work with a filesystem storage. It will also configure meta-monitoring of metrics and logs.
If you set the `singleBinary.replicas` value to 1 and set the deployment mode to `SingleBinary`, this chart configures Loki to run the `all` target in a [monolithic mode](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/deployment-modes/#monolithic-mode), designed to work with a filesystem storage. It will also configure meta-monitoring of metrics and logs.
If you set the `singleBinary.replicas` value to 2 or more, this chart configures Loki to run a *single binary* in a replicated, highly available mode. When running replicas of a single binary, you must configure object storage.

**Before you begin: Software Requirements**
@@ -39,13 +39,29 @@ If you set the `singleBinary.replicas` value to 2 or more, this chart configures
- If running a single replica of Loki, configure the `filesystem` storage:

```yaml
mode: SingleBinary
loki:
commonConfig:
replication_factor: 1
storage:
type: 'filesystem'
schemaConfig:
configs:
- from: 2024-01-01
store: tsdb
index:
prefix: loki_index_
period: 24h
object_store: filesystem # we're storing on filesystem so there's no real persistence here.
schema: v13
singleBinary:
replicas: 1
read:
replicas: 0
backend:
replicas: 0
write:
replicas: 0
```

- If running Loki with a replication factor greater than 1, set the desired number replicas and provide object storage credentials:
@@ -54,6 +70,15 @@ If you set the `singleBinary.replicas` value to 2 or more, this chart configures
loki:
commonConfig:
replication_factor: 3
schemaConfig:
configs:
- from: 2024-01-01
store: tsdb
index:
prefix: loki_index_
period: 24h
object_store: filesystem
schema: v13
storage:
bucketNames:
chunks: loki-chunks
2 changes: 1 addition & 1 deletion docs/sources/setup/size/_index.md
@@ -6,7 +6,7 @@ aliases:
- ../installation/sizing/
- ../installation/helm/generate
weight: 100
keywords: []
draft: true
---

<link rel="stylesheet" href="../../query/analyzer/style.css">
16 changes: 10 additions & 6 deletions docs/sources/setup/upgrade/_index.md
@@ -104,6 +104,10 @@ If no label is found matching the list, a value of `unknown_service` is applied.
You can change this list by providing a list of labels to `discover_service_name` in the [limits_config](/docs/loki/<LOKI_VERSION>/configure/#limits_config) block.
{{< admonition type="note" >}}
If you are already using a `service_label`, Loki will not make a new assignment.
{{< /admonition >}}
**You can disable this by providing an empty value for `discover_service_name`.**
#### Removed `shared_store` and `shared_store_key_prefix` from shipper configuration
@@ -171,7 +175,7 @@ The path prefix under which the delete requests are stored is decided by `-compa
#### Configuration `async_cache_write_back_concurrency` and `async_cache_write_back_buffer_size` have been removed
These configurations were redundant with the `Background` configuration in the [cache-config]({{< relref "../../configure#cache_config" >}}).
These configurations were redundant with the `Background` configuration in the [cache-config](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#cache_config).
`async_cache_write_back_concurrency` can be set with `writeback_goroutines`
`async_cache_write_back_buffer_size` can be set with `writeback_buffer`
@@ -277,7 +281,7 @@ The TSDB index type has support for caching results for 'stats' and 'volume' que
All of these are cached to the `results_cache`, which is configured in the `query_range` config section. By default, an in-memory cache is used.
#### Write dedupe cache is deprecated
Write dedupe cache is deprecated because it is not required by the newer single store indexes ([TSDB]({{< relref "../../operations/storage/tsdb" >}}) and [boltdb-shipper]({{< relref "../../operations/storage/boltdb-shipper" >}})).
Write dedupe cache is deprecated because it is not required by the newer single store indexes ([TSDB](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/tsdb/) and [boltdb-shipper](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/boltdb-shipper/)).
If you are using a [legacy index type](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#index-storage), consider migrating to TSDB (recommended).
#### Embedded cache metric changes
@@ -761,7 +765,7 @@ This histogram reports the distribution of log line sizes by file. It has 8 buck

This creates a lot of series, and we don't think this metric has enough value to offset the number of series generated, so we are removing it.
While this isn't a direct replacement, two metrics we find more useful are size and line counters configured via pipeline stages; an example of how to configure these metrics can be found in the [metrics pipeline stage docs]({{< relref "../../send-data/promtail/stages/metrics#counter" >}}).
While this isn't a direct replacement, two metrics we find more useful are size and line counters configured via pipeline stages; an example of how to configure these metrics can be found in the [metrics pipeline stage docs](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/promtail/stages/metrics/#counter).

#### `added Docker target` log message has been demoted from level=error to level=info

@@ -815,7 +819,7 @@ limits_config:
retention_period: [30d]
```

See the [retention docs]({{< relref "../../operations/storage/retention" >}}) for more info.
See the [retention docs](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/retention/) for more info.

#### Log messages on startup: proto: duplicate proto type registered:

@@ -1286,7 +1290,7 @@ If you happen to have `results_cache.max_freshness` set, use `limits_config.max_

### Promtail config removed

The long deprecated `entry_parser` config in Promtail has been removed, use [pipeline_stages]({{< relref "../../send-data/promtail/configuration#pipeline_stages" >}}) instead.
The long deprecated `entry_parser` config in Promtail has been removed, use [pipeline_stages](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/promtail/configuration/#pipeline_stages) instead.

### Upgrading schema to use boltdb-shipper and/or v11 schema

@@ -1616,7 +1620,7 @@ max_retries:
Loki 1.4.0 vendors Cortex v0.7.0-rc.0 which contains [several breaking config changes](https://github.com/cortexproject/cortex/blob/v0.7.0-rc.0/CHANGELOG).
In the [cache_config]({{< relref "../../configure#cache_config" >}}), `defaul_validity` has changed to `default_validity`.
In the [cache_config](https://grafana.com/docs/loki/<LOKI_VERSION>/configure#cache_config), `defaul_validity` has changed to `default_validity`.
If you configured your schema via arguments and not a config file, this is no longer supported. This is not something we had ever provided as an option in the docs, and it is unlikely anyone is doing it, but it is worth mentioning.
3 changes: 2 additions & 1 deletion docs/sources/shared/configuration.md
@@ -3148,7 +3148,8 @@ The `limits_config` block configures global and per-tenant limits in Loki. The v
[max_querier_bytes_read: <int> | default = 150GB]

# Enable log-volume endpoints.
[volume_enabled: <boolean>]
# CLI flag: -limits.volume-enabled
[volume_enabled: <boolean> | default = true]

# The maximum number of aggregated series in a log-volume response
# CLI flag: -limits.volume-max-series
24 changes: 17 additions & 7 deletions docs/sources/visualize/grafana.md
@@ -13,17 +13,16 @@ keywords:
---
# Visualize log data

[Grafana 6.0](/grafana/download/6.0.0) and more recent
versions have built-in support for Grafana Loki.
Use [Grafana 6.3](/grafana/download/6.3.0) or a more
recent version to take advantage of [LogQL]({{< relref "../query/_index.md" >}}) functionality.
Grafana 6.3 and later versions have built-in support for Grafana Loki and [LogQL](https://grafana.com/docs/loki/<LOKI_VERSION>/query/).

## Using Explore

1. Log into your Grafana instance. If this is your first time running
Grafana, the username and password both default to `admin`.
1. In Grafana, go to `Configuration` > `Data Sources` via the cog icon on the
1. In Grafana, go to `Connections` > `Data Sources` via the cog icon on the
left sidebar.
1. Click the big <kbd>+ Add data source</kbd> button.
1. Choose Loki from the list.
1. Click the big <kbd>+ Add a new data source</kbd> button.
1. Search for, or choose Loki from the list.
1. The http URL field should be the address of your Loki server. For example,
when running locally or with Docker using port mapping, the address is
likely `http://localhost:3100`. When running with docker-compose or
@@ -36,10 +35,21 @@ recent version to take advantage of [LogQL]({{< relref "../query/_index.md" >}})
<kbd>Log labels</kbd> button.
1. Learn more about querying by reading about Loki's query language [LogQL]({{< relref "../query/_index.md" >}}).

If you would like to see an example of this live, you can try [Grafana Play's Explore feature](https://play.grafana.org/explore?schemaVersion=1&panes=%7B%22v1d%22:%7B%22datasource%22:%22ac4000ca-1959-45f5-aa45-2bd0898f7026%22,%22queries%22:%5B%7B%22refId%22:%22A%22,%22expr%22:%22%7Bagent%3D%5C%22promtail%5C%22%7D%20%7C%3D%20%60%60%22,%22queryType%22:%22range%22,%22datasource%22:%7B%22type%22:%22loki%22,%22uid%22:%22ac4000ca-1959-45f5-aa45-2bd0898f7026%22%7D,%22editorMode%22:%22builder%22%7D%5D,%22range%22:%7B%22from%22:%22now-1h%22,%22to%22:%22now%22%7D%7D%7D&orgId=1).

Read more about Grafana's Explore feature, and how to search and filter
for logs with Loki, in the
[Grafana documentation](http://docs.grafana.org/features/explore).

## Using Grafana Dashboards

Because Loki is available as a Grafana data source, you can use LogQL queries based on that data source
to build complex visualizations that persist on Grafana dashboards.

{{< docs/play title="Loki Example Grafana Dashboard" url="https://play.grafana.org/d/T512JVH7z/" >}}

Read more about how to build Grafana dashboards in [Build your first dashboard](https://grafana.com/docs/grafana/latest/getting-started/build-first-dashboard/).

To configure Loki as a data source via provisioning, see [Configuring Grafana via
Provisioning](http://docs.grafana.org/features/datasources/loki/#configure-the-datasource-with-provisioning).
Set the data source's URL in the provisioning configuration.
4 changes: 2 additions & 2 deletions pkg/distributor/distributor.go
@@ -919,7 +919,7 @@ func extractLogLevelFromLogLine(log string) string {
return logLevelDebug
}
if strings.Contains(log, `:"info"`) || strings.Contains(log, `:"INFO"`) {
return logLevelDebug
return logLevelInfo
}
}

@@ -940,7 +940,7 @@ func extractLogLevelFromLogLine(log string) string {
return logLevelDebug
}
if strings.Contains(log, "=info") || strings.Contains(log, "=INFO") {
return logLevelDebug
return logLevelInfo
}
}
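The fix in both hunks above is the same: the info branches previously returned the debug constant. A self-contained sketch of this substring-based level detection (hypothetical names, not the real Loki helpers or constants) looks like:

```go
package main

import (
	"fmt"
	"strings"
)

// extractLogLevel sketches the substring matching shown in the diff. The bug
// fixed in this commit was that the info branches returned the debug level.
func extractLogLevel(line string) string {
	switch {
	case strings.Contains(line, "=debug") || strings.Contains(line, "=DEBUG") ||
		strings.Contains(line, `:"debug"`) || strings.Contains(line, `:"DEBUG"`):
		return "debug"
	case strings.Contains(line, "=info") || strings.Contains(line, "=INFO") ||
		strings.Contains(line, `:"info"`) || strings.Contains(line, `:"INFO"`):
		return "info" // previously (and incorrectly) returned "debug"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(extractLogLevel(`ts=2024-04-16T10:00:00Z level=info msg="ready"`)) // info
	fmt.Println(extractLogLevel(`{"level":"debug","msg":"starting"}`))             // debug
	fmt.Println(extractLogLevel(`plain line with no level`))                       // unknown
}
```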

1 change: 1 addition & 0 deletions pkg/validation/limits.go
@@ -385,6 +385,7 @@ func (l *Limits) RegisterFlags(f *flag.FlagSet) {
_ = l.MaxStructuredMetadataSize.Set(defaultMaxStructuredMetadataSize)
f.Var(&l.MaxStructuredMetadataSize, "limits.max-structured-metadata-size", "Maximum size accepted for structured metadata per entry. Default: 64 kb. Any log line exceeding this limit will be discarded. There is no limit when unset or set to 0.")
f.IntVar(&l.MaxStructuredMetadataEntriesCount, "limits.max-structured-metadata-entries-count", defaultMaxStructuredMetadataCount, "Maximum number of structured metadata entries per log line. Default: 128. Any log line exceeding this limit will be discarded. There is no limit when unset or set to 0.")
f.BoolVar(&l.VolumeEnabled, "limits.volume-enabled", true, "Enable log volume endpoint.")
}

// SetGlobalOTLPConfig set GlobalOTLPConfig which is used while unmarshaling per-tenant otlp config to use the default list of resource attributes picked as index labels.