Dataset Viewer

hash | date | author | commit_message | is_merge | git_diff | type | masked_commit_message |
---|---|---|---|---|---|---|---|
b23c6ff5bfdb9112a72a55a1058994f58fcb7280 | 2019-04-09 11:16:34 | Juho Mäkinen | Improve documentation based on what I learned when I did the Loki setup.
* Explain that the API needs the start parameter set if you want to see logs older than one hour
* Explain that S3 and DynamoDB can also use instance roles in EC2
* Add a couple of hints on how to debug Promtail
* Add documentation on how the Promtail Prometheus scrape_configs work | false | diff --git a/docs/api.md b/docs/api.md
index baed6f089d672..16159625dbed4 100644
--- a/docs/api.md
+++ b/docs/api.md
@@ -28,11 +28,15 @@ The Loki server has the following API endpoints (_Note:_ Authentication is out o
- `query`: a logQL query
- `limit`: max number of entries to return
- - `start`: the start time for the query, as a nanosecond Unix epoch (nanoseconds since 1970)
- - `end`: the end time for the query, as a nanosecond Unix epoch (nanoseconds since 1970)
+ - `start`: the start time for the query, as a nanosecond Unix epoch (nanoseconds since 1970). Default is always one hour ago.
+ - `end`: the end time for the query, as a nanosecond Unix epoch (nanoseconds since 1970).
- `direction`: `forward` or `backward`, useful when specifying a limit
- `regexp`: a regex to filter the returned results, will eventually be rolled into the query language
+ Loki needs to query the index store in order to find log streams for particular labels, and the store is spread out by time,
+ so you need to specify the start and end parameters accordingly. Querying far back into history will cause additional
+ load on the index server and make the query slower.
+
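+ For example, to see logs older than one hour you must pass an explicit `start`. A sketch, where the host,
+ label selector and time arithmetic are illustrative:
+
+ ```
+ # start of the window: three days ago, as a nanosecond Unix epoch
+ start=$(( ($(date +%s) - 3*24*3600) * 1000000000 ))
+ curl -G "http://localhost:3100/api/prom/query" \
+   --data-urlencode 'query={job="varlogs"}' \
+   --data-urlencode "start=${start}"
+ ```
+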
Responses look like this:
```
diff --git a/docs/operations.md b/docs/operations.md
index ffd7a4ed82931..5c5bb4bb26078 100644
--- a/docs/operations.md
+++ b/docs/operations.md
@@ -116,6 +116,18 @@ storage_config:
dynamodb: dynamodb://access_key:secret_access_key@region
```
+You can also use an EC2 instance role instead of hard-coding credentials as in the above example.
+If you wish to do this, the storage_config looks like this:
+
+```yaml
+storage_config:
+ aws:
+ s3: s3://region/bucket_name
+ dynamodbconfig:
+ dynamodb: dynamodb://region
+```
+
+
#### S3
Loki uses S3 as object storage. It stores logs within directories based on
@@ -138,6 +150,10 @@ You can setup DynamoDB by yourself, or have `table-manager` setup for you.
You can find out more info about table manager at
[Cortex project](https://github.com/cortexproject/cortex).
There is an example table manager deployment inside the ksonnet deployment method. You can find it [here](../production/ksonnet/loki/table-manager.libsonnet)
+The table-manager allows deleting old indices by rotating a number of different DynamoDB tables and deleting the oldest one. If you choose to
+create the table manually, you cannot easily erase old data and your index just grows indefinitely.
If you set your DynamoDB table manually, ensure you set the primary index key to `h`
-(string) and use `r` (binary) as the sort key. Make sure adjust your throughput base on your usage.
+(string) and use `r` (binary) as the sort key. Also set the "period" attribute in the yaml to zero.
+Make sure to adjust your throughput based on your usage.
+
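+For reference, the period attribute lives under the index section of the schema_config. A sketch, in which the
+from date, schema version and table prefix are illustrative:
+
+```yaml
+schema_config:
+  configs:
+  - from: 2019-01-01
+    store: aws
+    object_store: s3
+    schema: v9
+    index:
+      prefix: loki_index_
+      period: 0
+```
+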
diff --git a/docs/promtail.md b/docs/promtail.md
new file mode 100644
index 0000000000000..b8be25cec75af
--- /dev/null
+++ b/docs/promtail.md
@@ -0,0 +1,71 @@
+## Promtail and scrape_configs
+
+Promtail is an agent which reads the Kubernetes pod log files and sends streams of log data to
+the centralised Loki instances along with a set of labels. Each container in a single pod will usually yield a
+single log stream with a set of labels based on that particular pod's Kubernetes labels.
+
+Promtail finds the log locations and extracts the set of labels by using the *scrape_configs*
+section in the Promtail yaml configuration. The syntax is the same that Prometheus uses.
+
+The scrape_configs section contains one or more *entries*, which are all executed for each container in each new pod running
+on the instance. If more than one entry matches your logs you will get duplicates, as the logs are sent in more than
+one stream, likely with slightly different labels. Everything here is based on labels.
+The term "label" is used in more than one way, and the different uses are easily confused.
+
+* Labels starting with __ (two underscores) are internal labels. They are not stored in the Loki index and are
+ invisible after Promtail. They "magically" appear from different sources.
+* Labels starting with __meta_kubernetes_pod_label_* are "meta labels" which are generated based on your Kubernetes
+ pod labels. Example: if your Kubernetes pod has a label "name" set to "foobar", then the scrape_configs section
+ will have a label __meta_kubernetes_pod_label_name with its value set to "foobar".
+* There are other __meta_kubernetes_* labels based on the Kubernetes metadata, such as the namespace the pod is
+ running in (__meta_kubernetes_namespace) or the name of the container inside the pod (__meta_kubernetes_pod_container_name).
+* The label __path__ is a special label which Promtail reads to find out where the log files to be read are located.
+
+The most important part of each entry is the *relabel_configs*, a list of operations which create,
+rename, modify or alter labels. A single scrape_config can also reject logs by doing an "action: drop", which means
+that this particular scrape_config will not forward logs from a particular pod, but another scrape_config might.
+
+Many of the scrape_configs read labels from __meta_kubernetes_* meta-labels, assign them to intermediate labels
+such as __service__ based on a few different pieces of logic, possibly drop the processing if __service__ is empty,
+and finally set visible labels (such as "job") based on the __service__ label.
+
+In general, all of the default Promtail scrape_configs do the following (a combined sketch follows this list):
+ * They read pod logs from under /var/log/pods/$1/*.log.
+ * They set the "namespace" label directly from __meta_kubernetes_namespace.
+ * They expect to see your pod name in the "name" label.
+ * They set a "job" label which is roughly "your namespace/your job name".
+
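+Putting these together, a minimal scrape_config entry might look like the following sketch. The job name, the
+drop rule and the exact __path__ mapping are illustrative; the defaults shipped with Promtail are more involved:
+
+```yaml
+scrape_configs:
+- job_name: kubernetes-pods
+  kubernetes_sd_configs:
+  - role: pod
+  relabel_configs:
+  # Drop the pod entirely if it carries no "name" label.
+  - action: drop
+    regex: ^$
+    source_labels:
+    - __meta_kubernetes_pod_label_name
+  # Make the namespace visible in the final log stream.
+  - action: replace
+    source_labels:
+    - __meta_kubernetes_namespace
+    target_label: namespace
+  # Tell Promtail where to find the log files for this pod.
+  - action: replace
+    replacement: /var/log/pods/$1/*.log
+    source_labels:
+    - __meta_kubernetes_pod_uid
+    target_label: __path__
+```
+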
+### Idioms and examples for different relabel_configs:
+
+* Drop the processing if a label is empty:
+```yaml
+ - action: drop
+ regex: ^$
+ source_labels:
+ - __service__
+```
+* Drop the processing if any of these labels contains a value:
+```yaml
+ - action: drop
+ regex: .+
+ separator: ''
+ source_labels:
+ - __meta_kubernetes_pod_label_name
+ - __meta_kubernetes_pod_label_app
+```
+* Rename a metadata label into another so that it will be visible in the final log stream:
+```yaml
+ - action: replace
+ source_labels:
+ - __meta_kubernetes_namespace
+ target_label: namespace
+```
+* Convert all of the Kubernetes pod labels into visible labels:
+```yaml
+ - action: labelmap
+ regex: __meta_kubernetes_pod_label_(.+)
+```
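+  For example (illustrative), a pod label "app" with value "nginx" would become a visible "app" label on every log stream from that pod.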
+
+
+Additional reading:
+ * https://www.slideshare.net/roidelapluie/taking-advantage-of-prometheus-relabeling-109483749
\ No newline at end of file
diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md
index 9895661d4e2f1..04d5fd48df085 100644
--- a/docs/troubleshooting.md
+++ b/docs/troubleshooting.md
@@ -12,12 +12,14 @@ This can have several reasons:
- Restarting promtail will not necessarily resend log messages that have been read. To force sending all messages again, delete the positions file (default location `/tmp/positions.yaml`) or make sure new log messages are written after both promtail and Loki have started.
- Promtail is ignoring targets because of a configuration rule
- Detect this by turning on debug logging and then look for `dropping target, no labels` or `ignoring target` messages.
+- Promtail cannot find the location of your log files. Check that the scrape_configs contains a valid path setting for finding the logs on your worker nodes.
+- Your pods are running but not with the labels Promtail is expecting. Check the Promtail scrape_configs.
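+
+For example, to force promtail to resend everything (a sketch, assuming the default positions file location mentioned above):
+
+```bash
+# stop promtail, then:
+rm /tmp/positions.yaml
+# restart promtail; it will read the discovered log files from the beginning
+```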
## Debug output
Both binaries support a log level parameter on the command-line, e.g.: `loki --log.level=debug ...`
-## No labels:
+## No labels:
## Failed to create target, "ioutil.ReadDir: readdirent: not a directory" | unknown | Improve documentation based on what I learned when I did the Loki setup.
* Explain that the API needs the start parameter set if you want to see logs older than one hour
* Explain that S3 and DynamoDB can also use instance roles in EC2
* Add a couple of hints on how to debug Promtail
* Add documentation on how the Promtail Prometheus scrape_configs work |
da7acb49e56d5b053da3a54b0c3263679e45c6f1 | 2022-10-05 20:32:20 | Dylan Guedes | Loki: Add sharding support for negative/zeroed desired rate (#7342)
**What this PR does / why we need it**:
Adds the handling of negative/zeroed desired rate to our distributor
stream sharding. Without this, invalid desired rates results in a -Inf
number of shards. This also adds a log line to report that such a thing
happened.
Signed-off-by: Danny Kopping <danny.kopping@grafana.com>
Co-authored-by: Danny Kopping <danny.kopping@grafana.com> | false | diff --git a/pkg/distributor/distributor.go b/pkg/distributor/distributor.go
index 7dbd0ab497fe4..158e84c946238 100644
--- a/pkg/distributor/distributor.go
+++ b/pkg/distributor/distributor.go
@@ -9,6 +9,7 @@ import (
"strings"
"time"
+ "github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/prometheus/prometheus/model/labels"
@@ -411,14 +412,16 @@ func min(x1, x2 int) int {
// N is the sharding size for the given stream. shardStream returns the smaller
// streams and their associated keys for hashing to ingesters.
func (d *Distributor) shardStream(stream logproto.Stream, streamSize int, userID string) ([]uint32, []streamTracker) {
- shardCount := d.shardCountFor(&stream, streamSize, d.cfg.ShardStreams.DesiredRate.Val(), d.rateStore)
+ logger := log.With(util_log.WithUserID(userID, util_log.Logger), "stream", stream.Labels)
+
+ shardCount := d.shardCountFor(logger, &stream, streamSize, d.cfg.ShardStreams.DesiredRate.Val(), d.rateStore)
if shardCount <= 1 {
return []uint32{util.TokenFor(userID, stream.Labels)}, []streamTracker{{stream: stream}}
}
if d.cfg.ShardStreams.LoggingEnabled {
- level.Info(util_log.Logger).Log("msg", "sharding request", "stream", stream.Labels, "shard_count", shardCount)
+ level.Info(logger).Log("msg", "sharding request", "shard_count", shardCount)
}
streamLabels := labelTemplate(stream.Labels)
@@ -429,7 +432,7 @@ func (d *Distributor) shardStream(stream logproto.Stream, streamSize int, userID
for i := 0; i < shardCount; i++ {
shard, ok := d.createShard(stream, streamLabels, streamPattern, shardCount, i)
if !ok {
- level.Error(util_log.Logger).Log("msg", "couldn't create shard", "stream", stream.Labels, "idx", i)
+ level.Error(logger).Log("msg", "couldn't create shard", "idx", i)
continue
}
@@ -598,12 +601,19 @@ func (d *Distributor) parseStreamLabels(vContext validationContext, key string,
// based on the rate stored in the rate store and will store the new evaluated number of shards.
//
// desiredRate is expected to be given in bytes.
-func (d *Distributor) shardCountFor(stream *logproto.Stream, streamSize, desiredRate int, rateStore RateStore) int {
+func (d *Distributor) shardCountFor(logger log.Logger, stream *logproto.Stream, streamSize, desiredRate int, rateStore RateStore) int {
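+ // Guard against a non-positive desired rate: without this check it would
+ // produce a -Inf number of shards (see the PR description above), so we
+ // fall back to a single shard instead.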
+ if desiredRate <= 0 {
+ if d.cfg.ShardStreams.LoggingEnabled {
+ level.Error(logger).Log("msg", "invalid desired rate", "desired_rate", desiredRate)
+ }
+ return 1
+ }
+
rate, err := rateStore.RateFor(stream)
if err != nil {
d.streamShardingFailures.WithLabelValues("rate_not_found").Inc()
if d.cfg.ShardStreams.LoggingEnabled {
- level.Error(util_log.Logger).Log("msg", "couldn't shard stream because rate wasn't found", "stream", stream.Labels)
+ level.Error(logger).Log("msg", "couldn't shard stream because rate store returned error", "err", err)
}
return 1
}
@@ -612,7 +622,7 @@ func (d *Distributor) shardCountFor(stream *logproto.Stream, streamSize, desired
if shards > len(stream.Entries) {
d.streamShardingFailures.WithLabelValues("too_many_shards").Inc()
if d.cfg.ShardStreams.LoggingEnabled {
- level.Error(util_log.Logger).Log("msg", "number of shards bigger than number of entries", "stream", stream.Labels, "shards", shards, "entries", len(stream.Entries))
+ level.Error(logger).Log("msg", "number of shards bigger than number of entries", "shards", shards, "entries", len(stream.Entries))
}
return len(stream.Entries)
}
diff --git a/pkg/distributor/distributor_test.go b/pkg/distributor/distributor_test.go
index 64ffdd27914fe..e8b5e9c9ef1b4 100644
--- a/pkg/distributor/distributor_test.go
+++ b/pkg/distributor/distributor_test.go
@@ -37,6 +37,7 @@ import (
"github.com/grafana/loki/pkg/runtime"
fe "github.com/grafana/loki/pkg/util/flagext"
loki_flagext "github.com/grafana/loki/pkg/util/flagext"
+ util_log "github.com/grafana/loki/pkg/util/log"
loki_net "github.com/grafana/loki/pkg/util/net"
"github.com/grafana/loki/pkg/util/test"
"github.com/grafana/loki/pkg/validation"
@@ -870,6 +871,24 @@ func TestShardCountFor(t *testing.T) {
wantShards int
wantErr bool
}{
+ {
+ name: "2 entries with zero rate and desired rate < 0, return 1 shard",
+ stream: &logproto.Stream{Hash: 1},
+ rate: 0,
+ desiredRate: -5, // in bytes
+ wantStreamSize: 2, // in bytes
+ wantShards: 1,
+ wantErr: false,
+ },
+ {
+ name: "2 entries with zero rate and desired rate == 0, return 1 shard",
+ stream: &logproto.Stream{Hash: 1},
+ rate: 0,
+ desiredRate: 0, // in bytes
+ wantStreamSize: 2, // in bytes
+ wantShards: 1,
+ wantErr: false,
+ },
{
name: "0 entries, return 0 shards always",
stream: &logproto.Stream{Hash: 1},
@@ -938,7 +957,7 @@ func TestShardCountFor(t *testing.T) {
d := &Distributor{
streamShardingFailures: shardingFailureMetric,
}
- got := d.shardCountFor(tc.stream, tc.wantStreamSize, tc.desiredRate, &noopRateStore{tc.rate})
+ got := d.shardCountFor(util_log.Logger, tc.stream, tc.wantStreamSize, tc.desiredRate, &noopRateStore{tc.rate})
require.Equal(t, tc.wantShards, got)
})
} | Loki | Add sharding support for negative/zeroed desired rate (#7342)
**What this PR does / why we need it**:
Adds the handling of negative/zeroed desired rate to our distributor
stream sharding. Without this, invalid desired rates result in a -Inf
number of shards. This also adds a log line to report that such a thing
happened.
Signed-off-by: Danny Kopping <danny.kopping@grafana.com>
Co-authored-by: Danny Kopping <danny.kopping@grafana.com> |
18392ea19c98167a292ab831246166f539b9827d | 2019-05-30 00:53:06 | Steven Sheehy | Switch Loki to StatefulSet (#585)
Signed-off-by: Steven Sheehy <ssheehy@firescope.com> | false | diff --git a/production/helm/README.md b/production/helm/README.md
index 19cadd3017cc4..8a4cd098660de 100644
--- a/production/helm/README.md
+++ b/production/helm/README.md
@@ -105,20 +105,17 @@ tls:
## How to contribute
-If you want to add any feature to helm chart, you can follow as below:
+After adding your new feature to the appropriate chart, you can build and deploy it locally to test:
```bash
-$ # do some changes to loki/promtail in the corresponding directory
$ make helm
$ helm upgrade --install loki ./loki-stack-*.tgz
```
-After verify changes, need to bump chart version.
-For example, if you update the loki chart, you need to bump the version as following:
+After verifying your changes, you need to bump the chart version following [semantic versioning](https://semver.org) rules.
+For example, if you update the loki chart, you need to bump the versions as follows:
-```bash
-$ # update version loki/Chart.yaml
-$ # update version loki-stack/Chart.yaml
-```
+- Update version loki/Chart.yaml
+- Update version loki-stack/Chart.yaml
You can use the `make helm-debug` to test and print out all chart templates. If you want to install helm (tiller) in your cluster use `make helm-install`, to install the current build in your Kubernetes cluster run `make helm-upgrade`.
diff --git a/production/helm/loki-stack/Chart.yaml b/production/helm/loki-stack/Chart.yaml
index 1f5b69491bfb3..1209a6ad8b40a 100644
--- a/production/helm/loki-stack/Chart.yaml
+++ b/production/helm/loki-stack/Chart.yaml
@@ -1,5 +1,5 @@
name: loki-stack
-version: 0.9.5
+version: 0.10.0
appVersion: 0.0.1
kubeVersion: "^1.10.0-0"
description: "Loki: like Prometheus, but for logs."
diff --git a/production/helm/loki/Chart.yaml b/production/helm/loki/Chart.yaml
index cf5aab6f2df34..d0ef9035a0572 100644
--- a/production/helm/loki/Chart.yaml
+++ b/production/helm/loki/Chart.yaml
@@ -1,5 +1,5 @@
name: loki
-version: 0.8.5
+version: 0.9.0
appVersion: 0.0.1
kubeVersion: "^1.10.0-0"
description: "Loki: like Prometheus, but for logs."
diff --git a/production/helm/loki/templates/pvc.yaml b/production/helm/loki/templates/pvc.yaml
deleted file mode 100644
index 350c04b120ee2..0000000000000
--- a/production/helm/loki/templates/pvc.yaml
+++ /dev/null
@@ -1,22 +0,0 @@
-{{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim) }}
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
- name: {{ template "loki.fullname" . }}
- labels:
- app: {{ template "loki.name" . }}
- chart: {{ template "loki.chart" . }}
- heritage: {{ .Release.Service }}
- release: {{ .Release.Name }}
- annotations:
- {{- toYaml .Values.persistence.annotations | nindent 4 }}
-spec:
- accessModes:
- {{- range .Values.persistence.accessModes }}
- - {{ . | quote }}
- {{- end }}
- resources:
- requests:
- storage: {{ .Values.persistence.size | quote }}
- storageClassName: {{ .Values.persistence.storageClassName }}
-{{- end }}
diff --git a/production/helm/loki/templates/service-headless.yaml b/production/helm/loki/templates/service-headless.yaml
new file mode 100644
index 0000000000000..dbc127b3bea33
--- /dev/null
+++ b/production/helm/loki/templates/service-headless.yaml
@@ -0,0 +1,19 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ template "loki.fullname" . }}-headless
+ labels:
+ app: {{ template "loki.name" . }}
+ chart: {{ template "loki.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+spec:
+ clusterIP: None
+ ports:
+ - port: {{ .Values.service.port }}
+ protocol: TCP
+ name: http-metrics
+ targetPort: http-metrics
+ selector:
+ app: {{ template "loki.name" . }}
+ release: {{ .Release.Name }}
diff --git a/production/helm/loki/templates/deployment.yaml b/production/helm/loki/templates/statefulset.yaml
similarity index 77%
rename from production/helm/loki/templates/deployment.yaml
rename to production/helm/loki/templates/statefulset.yaml
index e8fe9be095c66..a1813b130aa21 100644
--- a/production/helm/loki/templates/deployment.yaml
+++ b/production/helm/loki/templates/statefulset.yaml
@@ -1,5 +1,5 @@
apiVersion: apps/v1
-kind: Deployment
+kind: StatefulSet
metadata:
name: {{ template "loki.fullname" . }}
labels:
@@ -10,17 +10,15 @@ metadata:
annotations:
{{- toYaml .Values.annotations | nindent 4 }}
spec:
+ podManagementPolicy: {{ .Values.podManagementPolicy }}
replicas: {{ .Values.replicas }}
- minReadySeconds: {{ .Values.minReadySeconds }}
selector:
matchLabels:
app: {{ template "loki.name" . }}
release: {{ .Release.Name }}
- strategy:
- type: {{ .Values.deploymentStrategy }}
- {{- if ne .Values.deploymentStrategy "RollingUpdate" }}
- rollingUpdate: null
- {{- end }}
+ serviceName: {{ template "loki.fullname" . }}-headless
+ updateStrategy:
+ {{- toYaml .Values.updateStrategy | nindent 4 }}
template:
metadata:
labels:
@@ -29,7 +27,7 @@ spec:
release: {{ .Release.Name }}
{{- with .Values.podLabels }}
{{- toYaml . | nindent 8 }}
- {{- end }}
+ {{- end }}
annotations:
checksum/config: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
{{- with .Values.podAnnotations }}
@@ -50,7 +48,7 @@ spec:
- "-config.file=/etc/loki/loki.yaml"
{{- range $key, $value := .Values.extraArgs }}
- "-{{ $key }}={{ $value }}"
- {{- end }}
+ {{- end }}
volumeMounts:
- name: config
mountPath: /etc/loki
@@ -85,11 +83,25 @@ spec:
- name: config
secret:
secretName: {{ template "loki.fullname" . }}
+ {{- if not .Values.persistence.enabled }}
- name: storage
- {{- if .Values.persistence.enabled }}
- persistentVolumeClaim:
- claimName: {{ .Values.persistence.existingClaim | default (include "loki.fullname" .) }}
- {{- else }}
emptyDir: {}
- {{- end }}
+ {{- else if .Values.persistence.existingClaim }}
+ - name: storage
+ persistentVolumeClaim:
+ claimName: {{ .Values.persistence.existingClaim }}
+ {{- else }}
+ volumeClaimTemplates:
+ - metadata:
+ name: storage
+ annotations:
+ {{- toYaml .Values.persistence.annotations | nindent 8 }}
+ spec:
+ accessModes:
+ {{- toYaml .Values.persistence.accessModes | nindent 8 }}
+ resources:
+ requests:
+ storage: {{ .Values.persistence.size | quote }}
+ storageClassName: {{ .Values.persistence.storageClassName }}
+ {{- end }}
diff --git a/production/helm/loki/values.yaml b/production/helm/loki/values.yaml
index 55eed6279190e..c1825e85b9876 100644
--- a/production/helm/loki/values.yaml
+++ b/production/helm/loki/values.yaml
@@ -11,7 +11,7 @@ affinity: {}
# - loki
# topologyKey: "kubernetes.io/hostname"
-## Deployment annotations
+## StatefulSet annotations
annotations: {}
# enable tracing for debug, need install jaeger and specify right jaeger_agent_host
@@ -20,7 +20,6 @@ tracing:
config:
auth_enabled: false
-
ingester:
chunk_idle_period: 15m
chunk_block_size: 262144
@@ -63,8 +62,6 @@ config:
retention_deletes_enabled: false
retention_period: 0
-deploymentStrategy: RollingUpdate
-
image:
repository: grafana/loki
tag: latest
@@ -80,8 +77,6 @@ livenessProbe:
port: http-metrics
initialDelaySeconds: 45
-minReadySeconds: 0
-
## Enable persistence using Persistent Volume Claims
networkPolicy:
enabled: false
@@ -111,6 +106,8 @@ podAnnotations:
prometheus.io/scrape: "true"
prometheus.io/port: "http-metrics"
+podManagementPolicy: OrderedReady
+
## Assign a PriorityClassName to pods if set
# priorityClassName:
@@ -162,3 +159,6 @@ tolerations: []
podDisruptionBudget: {}
# minAvailable: 1
# maxUnavailable: 1
+
+updateStrategy:
+ type: RollingUpdate | unknown | Switch Loki to StatefulSet (#585)
Signed-off-by: Steven Sheehy <ssheehy@firescope.com> |
9ddc756c1d18fff4c9f91b560a688e15292f9be4 | 2025-02-04 20:28:26 | renovate[bot] | fix(deps): update module golang.org/x/oauth2 to v0.26.0 (main) (#16085)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com> | false | diff --git a/go.mod b/go.mod
index f6b7dedfb55b7..b99581f1ed29d 100644
--- a/go.mod
+++ b/go.mod
@@ -145,7 +145,7 @@ require (
github.com/willf/bloom v2.0.3+incompatible
go.opentelemetry.io/collector/pdata v1.25.0
go4.org/netipx v0.0.0-20230125063823-8449b0a6169f
- golang.org/x/oauth2 v0.25.0
+ golang.org/x/oauth2 v0.26.0
golang.org/x/text v0.21.0
google.golang.org/protobuf v1.36.4
gotest.tools v2.2.0+incompatible
diff --git a/go.sum b/go.sum
index ce878f9db4506..d2cb7fc558638 100644
--- a/go.sum
+++ b/go.sum
@@ -1366,8 +1366,8 @@ golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4Iltr
golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
-golang.org/x/oauth2 v0.25.0 h1:CY4y7XT9v0cRI9oupztF8AgiIu99L/ksR/Xp/6jrZ70=
-golang.org/x/oauth2 v0.25.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
+golang.org/x/oauth2 v0.26.0 h1:afQXWNNaeC4nvZ0Ed9XvCCzXM6UHJG7iCg0W4fPqSBE=
+golang.org/x/oauth2 v0.26.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
diff --git a/vendor/golang.org/x/oauth2/google/default.go b/vendor/golang.org/x/oauth2/google/default.go
index df958359a8706..0260935bab745 100644
--- a/vendor/golang.org/x/oauth2/google/default.go
+++ b/vendor/golang.org/x/oauth2/google/default.go
@@ -251,6 +251,12 @@ func FindDefaultCredentials(ctx context.Context, scopes ...string) (*Credentials
// a Google Developers service account key file, a gcloud user credentials file (a.k.a. refresh
// token JSON), or the JSON configuration file for workload identity federation in non-Google cloud
// platforms (see https://cloud.google.com/iam/docs/how-to#using-workload-identity-federation).
+//
+// Important: If you accept a credential configuration (credential JSON/File/Stream) from an
+// external source for authentication to Google Cloud Platform, you must validate it before
+// providing it to any Google API or library. Providing an unvalidated credential configuration to
+// Google APIs can compromise the security of your systems and data. For more information, refer to
+// [Validate credential configurations from external sources](https://cloud.google.com/docs/authentication/external/externally-sourced-credentials).
func CredentialsFromJSONWithParams(ctx context.Context, jsonData []byte, params CredentialsParams) (*Credentials, error) {
// Make defensive copy of the slices in params.
params = params.deepCopy()
@@ -294,6 +300,12 @@ func CredentialsFromJSONWithParams(ctx context.Context, jsonData []byte, params
}
// CredentialsFromJSON invokes CredentialsFromJSONWithParams with the specified scopes.
+//
+// Important: If you accept a credential configuration (credential JSON/File/Stream) from an
+// external source for authentication to Google Cloud Platform, you must validate it before
+// providing it to any Google API or library. Providing an unvalidated credential configuration to
+// Google APIs can compromise the security of your systems and data. For more information, refer to
+// [Validate credential configurations from external sources](https://cloud.google.com/docs/authentication/external/externally-sourced-credentials).
func CredentialsFromJSON(ctx context.Context, jsonData []byte, scopes ...string) (*Credentials, error) {
var params CredentialsParams
params.Scopes = scopes
diff --git a/vendor/golang.org/x/oauth2/google/externalaccount/basecredentials.go b/vendor/golang.org/x/oauth2/google/externalaccount/basecredentials.go
index ee34924e301b1..fc106347d85c5 100644
--- a/vendor/golang.org/x/oauth2/google/externalaccount/basecredentials.go
+++ b/vendor/golang.org/x/oauth2/google/externalaccount/basecredentials.go
@@ -278,20 +278,52 @@ type Format struct {
type CredentialSource struct {
// File is the location for file sourced credentials.
// One field amongst File, URL, Executable, or EnvironmentID should be provided, depending on the kind of credential in question.
+ //
+ // Important: If you accept a credential configuration (credential
+ // JSON/File/Stream) from an external source for authentication to Google
+ // Cloud Platform, you must validate it before providing it to any Google
+ // API or library. Providing an unvalidated credential configuration to
+ // Google APIs can compromise the security of your systems and data. For
+ // more information, refer to [Validate credential configurations from
+ // external sources](https://cloud.google.com/docs/authentication/external/externally-sourced-credentials).
File string `json:"file"`
// Url is the URL to call for URL sourced credentials.
// One field amongst File, URL, Executable, or EnvironmentID should be provided, depending on the kind of credential in question.
+ //
+ // Important: If you accept a credential configuration (credential
+ // JSON/File/Stream) from an external source for authentication to Google
+ // Cloud Platform, you must validate it before providing it to any Google
+ // API or library. Providing an unvalidated credential configuration to
+ // Google APIs can compromise the security of your systems and data. For
+ // more information, refer to [Validate credential configurations from
+ // external sources](https://cloud.google.com/docs/authentication/external/externally-sourced-credentials).
URL string `json:"url"`
// Headers are the headers to attach to the request for URL sourced credentials.
Headers map[string]string `json:"headers"`
// Executable is the configuration object for executable sourced credentials.
// One field amongst File, URL, Executable, or EnvironmentID should be provided, depending on the kind of credential in question.
+ //
+ // Important: If you accept a credential configuration (credential
+ // JSON/File/Stream) from an external source for authentication to Google
+ // Cloud Platform, you must validate it before providing it to any Google
+ // API or library. Providing an unvalidated credential configuration to
+ // Google APIs can compromise the security of your systems and data. For
+ // more information, refer to [Validate credential configurations from
+ // external sources](https://cloud.google.com/docs/authentication/external/externally-sourced-credentials).
Executable *ExecutableConfig `json:"executable"`
// EnvironmentID is the EnvironmentID used for AWS sourced credentials. This should start with "AWS".
// One field amongst File, URL, Executable, or EnvironmentID should be provided, depending on the kind of credential in question.
+ //
+ // Important: If you accept a credential configuration (credential
+ // JSON/File/Stream) from an external source for authentication to Google
+ // Cloud Platform, you must validate it before providing it to any Google
+ // API or library. Providing an unvalidated credential configuration to
+ // Google APIs can compromise the security of your systems and data. For
+ // more information, refer to [Validate credential configurations from
+ // external sources](https://cloud.google.com/docs/authentication/external/externally-sourced-credentials).
EnvironmentID string `json:"environment_id"`
// RegionURL is the metadata URL to retrieve the region from for EC2 AWS credentials.
RegionURL string `json:"region_url"`
diff --git a/vendor/modules.txt b/vendor/modules.txt
index 4f92f6e5f9ae5..cabbd0a33109d 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -1933,7 +1933,7 @@ golang.org/x/net/netutil
golang.org/x/net/proxy
golang.org/x/net/publicsuffix
golang.org/x/net/trace
-# golang.org/x/oauth2 v0.25.0
+# golang.org/x/oauth2 v0.26.0
## explicit; go 1.18
golang.org/x/oauth2
golang.org/x/oauth2/authhandler | fix | update module golang.org/x/oauth2 to v0.26.0 (main) (#16085)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com> |
2d079340c3259138bd1745ecb736f5c70b4bb060 | 2021-03-09 02:30:11 | Owen Diehl | uses go_memstats_heap_inuse_bytes on the operational dashboard (#3447) | false | diff --git a/production/loki-mixin/dashboards/dashboard-loki-operational.json b/production/loki-mixin/dashboards/dashboard-loki-operational.json
index 96d54cfc68283..71281ad7137b7 100644
--- a/production/loki-mixin/dashboards/dashboard-loki-operational.json
+++ b/production/loki-mixin/dashboards/dashboard-loki-operational.json
@@ -2023,7 +2023,7 @@
"steppedLine": false,
"targets": [
{
- "expr": "container_memory_usage_bytes{cluster=\"$cluster\", namespace=\"$namespace\", pod=~\"distributor.*\"}",
+ "expr": "go_memstats_heap_inuse_bytes{cluster=\"$cluster\", namespace=\"$namespace\", pod=~\"distributor.*\"}",
"instant": false,
"intervalFactor": 3,
"legendFormat": "{{pod}}",
@@ -2687,7 +2687,7 @@
"steppedLine": false,
"targets": [
{
- "expr": "container_memory_usage_bytes{cluster=\"$cluster\", namespace=\"$namespace\", pod=~\"ingester.*\"}",
+ "expr": "go_memstats_heap_inuse_bytes{cluster=\"$cluster\", namespace=\"$namespace\", pod=~\"ingester.*\"}",
"instant": false,
"intervalFactor": 3,
"legendFormat": "{{pod}}",
@@ -3633,7 +3633,7 @@
"steppedLine": false,
"targets": [
{
- "expr": "container_memory_usage_bytes{cluster=\"$cluster\", namespace=\"$namespace\", pod=~\"querier.*\"}",
+ "expr": "go_memstats_heap_inuse_bytes{cluster=\"$cluster\", namespace=\"$namespace\", pod=~\"querier.*\"}",
"instant": false,
"intervalFactor": 3,
"legendFormat": "{{pod}}", | unknown | uses go_memstats_heap_inuse_bytes on the operational dashboard (#3447) |
40d7b979a4c08871c7d85575a5e3e51f2c8704ce | 2025-02-28 22:25:19 | Trevor Whitney | chore: implement multi-variant queries (#16426) | false | diff --git a/cmd/loki/loki-local-config.yaml b/cmd/loki/loki-local-config.yaml
index 4aff8772ae1da..e58cc2b540939 100644
--- a/cmd/loki/loki-local-config.yaml
+++ b/cmd/loki/loki-local-config.yaml
@@ -49,6 +49,11 @@ ruler:
frontend:
encoding: protobuf
+querier:
+ engine:
+ enable_multi_variant_queries: true
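+    # With this enabled, a single query can compute several variants over one
+    # selector, e.g. (illustrative; this is the syntax exercised by the tests
+    # in this change):
+    #   variants(bytes_over_time({app="foo"}[1m]), count_over_time({app="foo"}[1m])) of ({app="foo"}[1m])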
+
+
# By default, Loki will send anonymous, but uniquely-identifiable usage and configuration
# analytics to Grafana Labs. These statistics are sent to https://stats.grafana.org/
#
diff --git a/pkg/chunkenc/memchunk.go b/pkg/chunkenc/memchunk.go
index 1ff01a2fe5998..888c5ecb8e1a1 100644
--- a/pkg/chunkenc/memchunk.go
+++ b/pkg/chunkenc/memchunk.go
@@ -1717,7 +1717,7 @@ func newSampleIterator(
}
if len(extractors) > 1 {
- return newMultiExtractorSampleIterator(ctx, pool, b, format, extractors, symbolizer)
+ return newMultiExtractorSampleIterator(ctx, pool, b, format, symbolizer, extractors...)
}
return &sampleBufferedIterator{
diff --git a/pkg/chunkenc/memchunk_test.go b/pkg/chunkenc/memchunk_test.go
index be2c2b8bd1f55..fb9f8078c9810 100644
--- a/pkg/chunkenc/memchunk_test.go
+++ b/pkg/chunkenc/memchunk_test.go
@@ -1157,7 +1157,7 @@ func BenchmarkHeadBlockSampleIterator(b *testing.B) {
}
}
-func BenchmarkHeadBlockMultiExtractorSampleIterator(b *testing.B) {
+func BenchmarkHeadBlockSampleIterator_WithMultipleExtractors(b *testing.B) {
for _, j := range []int{20000, 10000, 8000, 5000} {
for _, withStructuredMetadata := range []bool{false, true} {
b.Run(fmt.Sprintf("size=%d structuredMetadata=%v", j, withStructuredMetadata), func(b *testing.B) {
diff --git a/pkg/chunkenc/variants.go b/pkg/chunkenc/variants.go
index 6968e82649993..ffdd4dbdcbf2a 100644
--- a/pkg/chunkenc/variants.go
+++ b/pkg/chunkenc/variants.go
@@ -14,7 +14,14 @@ import (
"github.com/grafana/loki/v3/pkg/logqlmodel/stats"
)
-func newMultiExtractorSampleIterator(ctx context.Context, pool compression.ReaderPool, b []byte, format byte, extractors []log.StreamSampleExtractor, symbolizer *symbolizer) iter.SampleIterator {
+func newMultiExtractorSampleIterator(
+ ctx context.Context,
+ pool compression.ReaderPool,
+ b []byte,
+ format byte,
+ symbolizer *symbolizer,
+ extractors ...log.StreamSampleExtractor,
+) iter.SampleIterator {
return &multiExtractorSampleBufferedIterator{
bufferedIterator: newBufferedIterator(ctx, pool, b, format, symbolizer),
extractors: extractors,
diff --git a/pkg/ingester/instance_test.go b/pkg/ingester/instance_test.go
index b91098f19a92d..3050b6e09f94e 100644
--- a/pkg/ingester/instance_test.go
+++ b/pkg/ingester/instance_test.go
@@ -876,6 +876,49 @@ func Test_ExtractorWrapper(t *testing.T) {
wrapper.extractor.sp.called,
) // we've passed every log line through the wrapper
})
+ t.Run("variants", func(t *testing.T) {
+ instance := defaultInstance(t)
+
+ wrapper := &testExtractorWrapper{
+ extractor: newMockExtractor(),
+ }
+ instance.extractorWrapper = wrapper
+
+ ctx := user.InjectOrgID(context.Background(), "test-user")
+ it, err := instance.QuerySample(ctx,
+ logql.SelectSampleParams{
+ SampleQueryRequest: &logproto.SampleQueryRequest{
Selector: `variants(sum(count_over_time({job="3"}[1m]))) of ({job="3"}[1m])`,
+ Start: time.Unix(0, 0),
+ End: time.Unix(0, 100000000),
+ Shards: []string{astmapper.ShardAnnotation{Shard: 0, Of: 1}.String()},
+ Plan: &plan.QueryPlan{
+ AST: syntax.MustParseExpr(
+ `variants(sum(count_over_time({job="3"}[1m]))) of ({job="3"}[1m])`,
+ ),
+ },
+ },
+ },
+ )
+ require.NoError(t, err)
+ defer it.Close()
+
+ for it.Next() {
+ // Consume the iterator
+ require.NoError(t, it.Err())
+ }
+
+ require.Equal(
+ t,
+ `variants(sum(count_over_time({job="3"}[1m]))) of ({job="3"}[1m])`,
+ wrapper.query,
+ )
+ require.Equal(
+ t,
+ 10,
+ wrapper.extractor.sp.called,
+ ) // we've passed every log line through the wrapper
+ })
}
func Test_ExtractorWrapper_disabled(t *testing.T) {
@@ -913,6 +956,37 @@ func Test_ExtractorWrapper_disabled(t *testing.T) {
require.Equal(t, ``, wrapper.query)
require.Equal(t, 0, wrapper.extractor.sp.called) // we've passed every log line through the wrapper
})
+
+ t.Run("variants", func(t *testing.T) {
+ ctx := user.InjectOrgID(context.Background(), "test-user")
+ ctx = httpreq.InjectHeader(ctx, httpreq.LokiDisablePipelineWrappersHeader, "true")
+ it, err := instance.QuerySample(ctx,
+ logql.SelectSampleParams{
+ SampleQueryRequest: &logproto.SampleQueryRequest{
Selector: `variants(sum(count_over_time({job="3"}[1m]))) of ({job="3"}[1m])`,
+ Start: time.Unix(0, 0),
+ End: time.Unix(0, 100000000),
+ Shards: []string{astmapper.ShardAnnotation{Shard: 0, Of: 1}.String()},
+ Plan: &plan.QueryPlan{
+ AST: syntax.MustParseExpr(
+ `variants(sum(count_over_time({job="3"}[1m]))) of ({job="3"}[1m])`,
+ ),
+ },
+ },
+ },
+ )
+ require.NoError(t, err)
+ defer it.Close()
+
+ for it.Next() {
+ // Consume the iterator
+ require.NoError(t, it.Err())
+ }
+
+ require.Equal(t, ``, wrapper.query)
+ require.Equal(t, 0, wrapper.extractor.sp.called) // we've passed every log line through the wrapper
+
+ })
}
type testExtractorWrapper struct {
@@ -1059,6 +1133,49 @@ func Test_QuerySampleWithDelete(t *testing.T) {
require.Equal(t, samples, []float64{1.})
}
+func Test_QueryVariantsWithDelete(t *testing.T) {
+ instance := defaultInstance(t)
+
+ it, err := instance.QuerySample(context.TODO(),
+ logql.SelectSampleParams{
+ SampleQueryRequest: &logproto.SampleQueryRequest{
+ Selector: `variants(count_over_time({job="3"}[5m])) of ({job="3"}[5m])`,
+ Start: time.Unix(0, 0),
+ End: time.Unix(0, 110000000),
+ Deletes: []*logproto.Delete{
+ {
+ Selector: `{log_stream="worker"}`,
+ Start: 0,
+ End: 10 * 1e6,
+ },
+ {
+ Selector: `{log_stream="dispatcher"}`,
+ Start: 0,
+ End: 5 * 1e6,
+ },
+ {
+ Selector: `{log_stream="dispatcher"} |= "9"`,
+ Start: 0,
+ End: 10 * 1e6,
+ },
+ },
+ Plan: &plan.QueryPlan{
+ AST: syntax.MustParseExpr(`variants(count_over_time({job="3"}[5m])) of ({job="3"}[5m])`),
+ },
+ },
+ },
+ )
+ require.NoError(t, err)
+ defer it.Close()
+
+ var samples []float64
+ for it.Next() {
+ samples = append(samples, it.At().Value)
+ }
+
+ require.Equal(t, samples, []float64{1.})
+}
+
type fakeLimits struct {
limits map[string]*validation.Limits
}
diff --git a/pkg/logql/downstream.go b/pkg/logql/downstream.go
index c52d7dea6a043..924afc896bc44 100644
--- a/pkg/logql/downstream.go
+++ b/pkg/logql/downstream.go
@@ -695,6 +695,15 @@ func (ev *DownstreamEvaluator) NewStepEvaluator(
}
}
+func (ev *DownstreamEvaluator) NewVariantsStepEvaluator(
+ _ context.Context,
+ _ syntax.VariantsExpr,
+ _ Params,
+) (StepEvaluator, error) {
+ // TODO(twhitney): does the downstream evaluator need to handle variants?
+ return nil, errors.New("NewVariantsStepEvaluator hasn't been implemented on DownstreamEvaluator")
+}
+
// NewIterator returns the iter.EntryIterator for a given LogSelectorExpr
func (ev *DownstreamEvaluator) NewIterator(
ctx context.Context,
diff --git a/pkg/logql/engine.go b/pkg/logql/engine.go
index a828ff114463f..201a2ba58fa70 100644
--- a/pkg/logql/engine.go
+++ b/pkg/logql/engine.go
@@ -328,6 +328,14 @@ func (q *query) Eval(ctx context.Context) (promql_parser.Value, error) {
}
switch e := q.params.GetExpression().(type) {
+ // A VariantsExpr is a specific type of SampleExpr, so make sure this case is evaluated first
+ case syntax.VariantsExpr:
+ if !q.multiVariant {
+ return nil, logqlmodel.ErrVariantsDisabled
+ }
+
+ value, err := q.evalVariants(ctx, e)
+ return value, err
case syntax.SampleExpr:
value, err := q.evalSample(ctx, e)
return value, err
@@ -346,12 +354,6 @@ func (q *query) Eval(ctx context.Context) (promql_parser.Value, error) {
defer util.LogErrorWithContext(ctx, "closing iterator", itr.Close)
streams, err := readStreams(itr, q.params.Limit(), q.params.Direction(), q.params.Interval())
return streams, err
- case syntax.VariantsExpr:
- if !q.multiVariant {
- return nil, logqlmodel.ErrVariantsDisabled
- }
-
- return nil, errors.New("variants not yet implemented")
default:
return nil, fmt.Errorf("unexpected type (%T): cannot evaluate", e)
}
@@ -640,3 +642,64 @@ type groupedAggregation struct {
heap vectorByValueHeap
reverseHeap vectorByReverseValueHeap
}
+
+func (q *query) evalVariants(
+ ctx context.Context,
+ expr syntax.VariantsExpr,
+) (promql_parser.Value, error) {
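+ // Evaluation mirrors a regular sample query: validate each variant's range
+ // interval against the tenant limit, optimize each variant expression, then
+ // run a single step evaluator over the combined expression.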
+ tenantIDs, err := tenant.TenantIDs(ctx)
+ if err != nil {
+ return nil, err
+ }
+
+ maxIntervalCapture := func(id string) time.Duration { return q.limits.MaxQueryRange(ctx, id) }
+ maxQueryInterval := validation.SmallestPositiveNonZeroDurationPerTenant(
+ tenantIDs,
+ maxIntervalCapture,
+ )
+ if maxQueryInterval != 0 {
+ for i, v := range expr.Variants() {
+ err = q.checkIntervalLimit(v, maxQueryInterval)
+ if err != nil {
+ return nil, err
+ }
+
+ vExpr, err := optimizeSampleExpr(v)
+ if err != nil {
+ return nil, err
+ }
+
+ if err = expr.SetVariant(i, vExpr); err != nil {
+ return nil, err
+ }
+ }
+ }
+
+ stepEvaluator, err := q.evaluator.NewVariantsStepEvaluator(ctx, expr, q.params)
+ if err != nil {
+ return nil, err
+ }
+ defer util.LogErrorWithContext(ctx, "closing VariantsExpr", stepEvaluator.Close)
+
+ next, _, r := stepEvaluator.Next()
+ if stepEvaluator.Error() != nil {
+ return nil, stepEvaluator.Error()
+ }
+
+ if next && r != nil {
+ switch vec := r.(type) {
+ case SampleVector:
+ maxSeriesCapture := func(id string) int { return q.limits.MaxQuerySeries(ctx, id) }
+ maxSeries := validation.SmallestPositiveIntPerTenant(tenantIDs, maxSeriesCapture)
+ // TODO(twhitney): what is merge first last for?
+ mfl := false
+ // if rae, ok := expr.(*syntax.RangeAggregationExpr); ok && (rae.Operation == syntax.OpRangeTypeFirstWithTimestamp || rae.Operation == syntax.OpRangeTypeLastWithTimestamp) {
+ // mfl = true
+ // }
+ return q.JoinSampleVector(next, vec, stepEvaluator, maxSeries, mfl)
+ default:
+ return nil, fmt.Errorf("unsupported result type: %T", r)
+ }
+ }
+ return nil, errors.New("unexpected empty result")
+}
diff --git a/pkg/logql/engine_test.go b/pkg/logql/engine_test.go
index 9b4c241a1c765..6f2906d727097 100644
--- a/pkg/logql/engine_test.go
+++ b/pkg/logql/engine_test.go
@@ -51,6 +51,8 @@ func TestEngine_checkIntervalLimit(t *testing.T) {
{query: `rate({app="foo"} [1h])`, expErr: "[1h] > [10m]"},
{query: `sum(rate({app="foo"} [1h]))`, expErr: "[1h] > [10m]"},
{query: `sum_over_time({app="foo"} |= "foo" | json | unwrap bar [1h])`, expErr: "[1h] > [10m]"},
+ {query: `variants(rate({app="foo"}[5m])) of ({app="foo"}[5m])`, expErr: ""},
+ {query: `variants(rate({app="foo"}[1h])) of ({app="foo"}[1h])`, expErr: "[1h] > [10m]"},
} {
for _, downstream := range []bool{true, false} {
t.Run(fmt.Sprintf("%v/downstream=%v", tc.query, downstream), func(t *testing.T) {
@@ -2309,6 +2311,360 @@ func TestEngine_RangeQuery(t *testing.T) {
}
}
+func TestEngine_Variants_InstantQuery(t *testing.T) {
+ t.Parallel()
+ for _, test := range []struct {
+ qs string
+ ts time.Time
+ direction logproto.Direction
+ limit uint32
+
+ // an array of data per params will be returned by the querier.
+ // This is to cover logql that requires multiple queries.
+ data interface{}
+ params interface{}
+
+ expected interface{}
+ }{
+ {
+ `variants(bytes_over_time({app="foo"}[1m]), count_over_time({app="foo"}[1m])) of ({app="foo"}[1m])`,
+ time.Unix(60, 0),
+ logproto.BACKWARD,
+ 0,
+ [][]logproto.Series{
+ {newSeries(testSize, identity, `{app="foo"}`)},
+ },
+ []SelectSampleParams{
+ {
+ &logproto.SampleQueryRequest{
+ Selector: `variants(bytes_over_time({app="foo"}[1m]), count_over_time({app="foo"}[1m])) of ({app="foo"}[1m])`,
+ Plan: &plan.QueryPlan{
+ AST: syntax.MustParseExpr(`variants(bytes_over_time({app="foo"}[1m]), count_over_time({app="foo"}[1m])) of ({app="foo"}[1m])`),
+ },
+ Start: time.Unix(0, 0),
+ End: time.Unix(60, 0),
+ },
+ },
+ },
+ promql.Vector{
+ promql.Sample{T: 60 * 1000, F: 60, Metric: labels.FromStrings("__variant__", "0", "app", "foo")},
+ promql.Sample{T: 60 * 1000, F: 60, Metric: labels.FromStrings("__variant__", "1", "app", "foo")},
+ },
+ },
+ {
+ `variants(sum by (app) (bytes_over_time({app="foo"}[1m])), sum by (app) (count_over_time({app="foo"}[1m]))) of ({app="foo"}[1m])`,
+ time.Unix(60, 0),
+ logproto.BACKWARD,
+ 0,
+ [][]logproto.Series{
+ {
+ newSeries(testSize, identity, `{app="foo", foo="bar"}`),
+ newSeries(testSize, identity, `{app="foo", foo="baz"}`),
+ },
+ },
+ []SelectSampleParams{
+ {
+ &logproto.SampleQueryRequest{
+ Selector: `variants(sum by (app) (bytes_over_time({app="foo"}[1m])), sum by (app) (count_over_time({app="foo"}[1m]))) of ({app="foo"}[1m])`,
+ Plan: &plan.QueryPlan{
+ AST: syntax.MustParseExpr(`variants(sum by (app) (bytes_over_time({app="foo"}[1m])), sum by (app) (count_over_time({app="foo"}[1m]))) of ({app="foo"}[1m])`),
+ },
+ Start: time.Unix(0, 0),
+ End: time.Unix(60, 0),
+ },
+ },
+ },
+ promql.Vector{
+ promql.Sample{T: 60 * 1000, F: 120, Metric: labels.FromStrings("__variant__", "0", "app", "foo")},
+ promql.Sample{T: 60 * 1000, F: 120, Metric: labels.FromStrings("__variant__", "1", "app", "foo")},
+ },
+ },
+ {
+ `variants(bytes_over_time({app="foo"}[1m]), count_over_time({app="foo"}[1m])) of ({app="foo"}[1m])`,
+ time.Unix(60, 0),
+ logproto.BACKWARD,
+ 0,
+ [][]logproto.Series{
+ {
+ newSeries(testSize, identity, `{app="foo", foo="bar"}`),
+ newSeries(testSize, identity, `{app="foo", foo="baz"}`),
+ },
+ },
+ []SelectSampleParams{
+ {
+ &logproto.SampleQueryRequest{
+ Selector: `variants(bytes_over_time({app="foo"}[1m]), count_over_time({app="foo"}[1m])) of ({app="foo"}[1m])`,
+ Plan: &plan.QueryPlan{
+ AST: syntax.MustParseExpr(`variants(bytes_over_time({app="foo"}[1m]), count_over_time({app="foo"}[1m])) of ({app="foo"}[1m])`),
+ },
+ Start: time.Unix(0, 0),
+ End: time.Unix(60, 0),
+ },
+ },
+ },
+ promql.Vector{
+ promql.Sample{T: 60 * 1000, F: 60, Metric: labels.FromStrings("__variant__", "0", "app", "foo", "foo", "bar")},
+ promql.Sample{T: 60 * 1000, F: 60, Metric: labels.FromStrings("__variant__", "0", "app", "foo", "foo", "baz")},
+ promql.Sample{T: 60 * 1000, F: 60, Metric: labels.FromStrings("__variant__", "1", "app", "foo", "foo", "bar")},
+ promql.Sample{T: 60 * 1000, F: 60, Metric: labels.FromStrings("__variant__", "1", "app", "foo", "foo", "baz")},
+ },
+ },
+ {
+ `variants(sum by (app) (bytes_over_time({app="foo"}[1m])), count_over_time({app="foo"}[1m])) of ({app="foo"}[1m])`,
+ time.Unix(60, 0),
+ logproto.BACKWARD,
+ 0,
+ [][]logproto.Series{
+ {
+ newSeries(testSize, identity, `{app="foo", foo="bar"}`),
+ newSeries(testSize, identity, `{app="foo", foo="baz"}`),
+ },
+ },
+ []SelectSampleParams{
+ {
+ &logproto.SampleQueryRequest{
+ Selector: `variants(sum by (app) (bytes_over_time({app="foo"}[1m])), count_over_time({app="foo"}[1m])) of ({app="foo"}[1m])`,
+ Plan: &plan.QueryPlan{
+ AST: syntax.MustParseExpr(`variants(sum by (app) (bytes_over_time({app="foo"}[1m])), count_over_time({app="foo"}[1m])) of ({app="foo"}[1m])`),
+ },
+ Start: time.Unix(0, 0),
+ End: time.Unix(60, 0),
+ },
+ },
+ },
+ promql.Vector{
+ promql.Sample{T: 60 * 1000, F: 120, Metric: labels.FromStrings("__variant__", "0", "app", "foo")},
+ promql.Sample{T: 60 * 1000, F: 60, Metric: labels.FromStrings("__variant__", "1", "app", "foo", "foo", "bar")},
+ promql.Sample{T: 60 * 1000, F: 60, Metric: labels.FromStrings("__variant__", "1", "app", "foo", "foo", "baz")},
+ },
+ },
+ } {
+ t.Run(fmt.Sprintf("%s %s", test.qs, test.direction), func(t *testing.T) {
+ eng := NewEngine(
+ EngineOpts{
+ EnableMutiVariantQueries: true,
+ },
+ newQuerierRecorder(t, test.data, test.params),
+ NoLimits,
+ log.NewNopLogger(),
+ )
+
+ params, err := NewLiteralParams(
+ test.qs,
+ test.ts,
+ test.ts,
+ 0,
+ 0,
+ test.direction,
+ test.limit,
+ nil,
+ nil,
+ )
+ require.NoError(t, err)
+ q := eng.Query(params)
+ res, err := q.Exec(user.InjectOrgID(context.Background(), "fake"))
+ if expectedError, ok := test.expected.(error); ok {
+ assert.Equal(t, expectedError.Error(), err.Error())
+ } else {
+ if err != nil {
+ t.Fatal(err)
+ }
+ assert.Equal(t, test.expected, res.Data)
+ }
+ })
+ }
+}
+
+func TestEngine_Variants_RangeQuery(t *testing.T) {
+ t.Parallel()
+ for _, test := range []struct {
+ qs string
+ start time.Time
+ end time.Time
+ step time.Duration
+ interval time.Duration
+ direction logproto.Direction
+ limit uint32
+
+ // an array of streams per SelectParams will be returned by the querier.
+ // This is to cover logql that requires multiple queries.
+ data interface{}
+ params interface{}
+
+ expected promql_parser.Value
+ }{
+ {
+ `variants(bytes_over_time({app="foo"}[1m]), count_over_time({app="foo"}[1m])) of ({app="foo"}[1m])`,
+ time.Unix(60, 0), time.Unix(120, 0), time.Minute, 0, logproto.FORWARD, 10,
+ [][]logproto.Series{
+ {newSeries(testSize, identity, `{app="foo"}`)},
+ },
+ []SelectSampleParams{
+ {
+ &logproto.SampleQueryRequest{
+ Selector: `variants(bytes_over_time({app="foo"}[1m]), count_over_time({app="foo"}[1m])) of ({app="foo"}[1m])`,
+ Plan: &plan.QueryPlan{
+ AST: syntax.MustParseExpr(`variants(bytes_over_time({app="foo"}[1m]), count_over_time({app="foo"}[1m])) of ({app="foo"}[1m])`),
+ },
+ Start: time.Unix(0, 0),
+ End: time.Unix(120, 0),
+ },
+ },
+ },
+ promql.Matrix{
+ promql.Series{
+ Metric: labels.FromStrings("__variant__", "0", "app", "foo"),
+ Floats: []promql.FPoint{{T: 60 * 1000, F: 60}, {T: 120 * 1000, F: 60}},
+ },
+ promql.Series{
+ Metric: labels.FromStrings("__variant__", "1", "app", "foo"),
+ Floats: []promql.FPoint{{T: 60 * 1000, F: 60}, {T: 120 * 1000, F: 60}},
+ },
+ },
+ },
+ {
+ `variants(sum by (app) (bytes_over_time({app="foo"}[1m])), sum by (app) (count_over_time({app="foo"}[1m]))) of ({app="foo"}[1m])`,
+ time.Unix(60, 0), time.Unix(120, 0), time.Minute, 0, logproto.BACKWARD, 10,
+ [][]logproto.Series{
+ {
+ newSeries(testSize, identity, `{app="foo", foo="bar"}`),
+ newSeries(testSize, identity, `{app="foo", foo="baz"}`),
+ },
+ },
+ []SelectSampleParams{
+ {
+ &logproto.SampleQueryRequest{
+ Selector: `variants(sum by (app) (bytes_over_time({app="foo"}[1m])), sum by (app) (count_over_time({app="foo"}[1m]))) of ({app="foo"}[1m])`,
+ Plan: &plan.QueryPlan{
+ AST: syntax.MustParseExpr(`variants(sum by (app) (bytes_over_time({app="foo"}[1m])), sum by (app) (count_over_time({app="foo"}[1m]))) of ({app="foo"}[1m])`),
+ },
+ Start: time.Unix(0, 0),
+ End: time.Unix(60, 0),
+ },
+ },
+ },
+ promql.Matrix{
+ promql.Series{
+ Metric: labels.FromStrings("__variant__", "0", "app", "foo"),
+ Floats: []promql.FPoint{{T: 60 * 1000, F: 120}, {T: 120 * 1000, F: 120}},
+ },
+ promql.Series{
+ Metric: labels.FromStrings("__variant__", "1", "app", "foo"),
+ Floats: []promql.FPoint{{T: 60 * 1000, F: 120}, {T: 120 * 1000, F: 120}},
+ },
+ },
+ },
+ {
+ `variants(bytes_over_time({app="foo"}[1m]), count_over_time({app="foo"}[1m])) of ({app="foo"}[1m])`,
+ time.Unix(60, 0), time.Unix(120, 0), time.Minute, 0, logproto.BACKWARD, 10,
+ [][]logproto.Series{
+ {
+ newSeries(testSize, identity, `{app="foo", foo="bar"}`),
+ newSeries(testSize, identity, `{app="foo", foo="baz"}`),
+ },
+ },
+ []SelectSampleParams{
+ {
+ &logproto.SampleQueryRequest{
+ Selector: `variants(bytes_over_time({app="foo"}[1m]), count_over_time({app="foo"}[1m])) of ({app="foo"}[1m])`,
+ Plan: &plan.QueryPlan{
+ AST: syntax.MustParseExpr(`variants(bytes_over_time({app="foo"}[1m]), count_over_time({app="foo"}[1m])) of ({app="foo"}[1m])`),
+ },
+ Start: time.Unix(0, 0),
+ End: time.Unix(60, 0),
+ },
+ },
+ },
+ promql.Matrix{
+ promql.Series{
+ Metric: labels.FromStrings("__variant__", "0", "app", "foo", "foo", "bar"),
+ Floats: []promql.FPoint{{T: 60 * 1000, F: 60}, {T: 120 * 1000, F: 60}},
+ },
+ promql.Series{
+ Metric: labels.FromStrings("__variant__", "0", "app", "foo", "foo", "baz"),
+ Floats: []promql.FPoint{{T: 60 * 1000, F: 60}, {T: 120 * 1000, F: 60}},
+ },
+ promql.Series{
+ Metric: labels.FromStrings("__variant__", "1", "app", "foo", "foo", "bar"),
+ Floats: []promql.FPoint{{T: 60 * 1000, F: 60}, {T: 120 * 1000, F: 60}},
+ },
+ promql.Series{
+ Metric: labels.FromStrings("__variant__", "1", "app", "foo", "foo", "baz"),
+ Floats: []promql.FPoint{{T: 60 * 1000, F: 60}, {T: 120 * 1000, F: 60}},
+ },
+ },
+ },
+ {
+ `variants(sum by (app) (bytes_over_time({app="foo"}[1m])), count_over_time({app="foo"}[1m])) of ({app="foo"}[1m])`,
+ time.Unix(60, 0), time.Unix(120, 0), time.Minute, 0, logproto.BACKWARD, 10,
+ [][]logproto.Series{
+ {
+ newSeries(testSize, identity, `{app="foo", foo="bar"}`),
+ newSeries(testSize, identity, `{app="foo", foo="baz"}`),
+ },
+ },
+ []SelectSampleParams{
+ {
+ &logproto.SampleQueryRequest{
+ Selector: `variants(sum by (app) (bytes_over_time({app="foo"}[1m])), count_over_time({app="foo"}[1m])) of ({app="foo"}[1m])`,
+ Plan: &plan.QueryPlan{
+ AST: syntax.MustParseExpr(`variants(sum by (app) (bytes_over_time({app="foo"}[1m])), count_over_time({app="foo"}[1m])) of ({app="foo"}[1m])`),
+ },
+ Start: time.Unix(0, 0),
+ End: time.Unix(60, 0),
+ },
+ },
+ },
+ promql.Matrix{
+ promql.Series{
+ Metric: labels.FromStrings("__variant__", "0", "app", "foo"),
+ Floats: []promql.FPoint{{T: 60 * 1000, F: 120}, {T: 120 * 1000, F: 120}},
+ },
+ promql.Series{
+ Metric: labels.FromStrings("__variant__", "1", "app", "foo", "foo", "bar"),
+ Floats: []promql.FPoint{{T: 60 * 1000, F: 60}, {T: 120 * 1000, F: 60}},
+ },
+ promql.Series{
+ Metric: labels.FromStrings("__variant__", "1", "app", "foo", "foo", "baz"),
+ Floats: []promql.FPoint{{T: 60 * 1000, F: 60}, {T: 120 * 1000, F: 60}},
+ },
+ },
+ },
+ } {
+ t.Run(fmt.Sprintf("%s %s", test.qs, test.direction), func(t *testing.T) {
+ t.Parallel()
+
+ eng := NewEngine(
+ EngineOpts{
+ EnableMutiVariantQueries: true,
+ },
+ newQuerierRecorder(t, test.data, test.params),
+ NoLimits,
+ log.NewNopLogger(),
+ )
+
+ params, err := NewLiteralParams(
+ test.qs,
+ test.start,
+ test.end,
+ test.step,
+ test.interval,
+ test.direction,
+ test.limit,
+ nil,
+ nil,
+ )
+ require.NoError(t, err)
+ q := eng.Query(params)
+ res, err := q.Exec(user.InjectOrgID(context.Background(), "fake"))
+ if err != nil {
+ t.Fatal(err)
+ }
+ assert.Equal(t, test.expected, res.Data)
+ })
+ }
+}
+
type statsQuerier struct{}
func (statsQuerier) SelectLogs(ctx context.Context, _ SelectLogParams) (iter.EntryIterator, error) {
@@ -2674,6 +3030,11 @@ func TestUnexpectedEmptyResults(t *testing.T) {
return EmptyEvaluator[SampleVector]{value: nil}, nil
},
),
+ VariantsEvaluatorFunc(
+ func(context.Context, syntax.VariantsExpr, Params) (StepEvaluator, error) {
+ return EmptyEvaluator[SampleVector]{value: nil}, nil
+ },
+ ),
}
eng := NewEngine(EngineOpts{}, nil, NoLimits, log.NewNopLogger())
@@ -2687,10 +3048,25 @@ func TestUnexpectedEmptyResults(t *testing.T) {
}
type mockEvaluatorFactory struct {
- SampleEvaluatorFactory
+ sampleEvalFunc SampleEvaluatorFunc
+ variantEvalFunc VariantsEvaluatorFunc
+}
+
+func (m *mockEvaluatorFactory) NewStepEvaluator(ctx context.Context, nextEvaluatorFactory SampleEvaluatorFactory, expr syntax.SampleExpr, p Params) (StepEvaluator, error) {
+ if m.sampleEvalFunc != nil {
+ return m.sampleEvalFunc(ctx, nextEvaluatorFactory, expr, p)
+ }
+ return nil, errors.New("unimplemented mock SampleEvaluatorFactory")
}
-func (*mockEvaluatorFactory) NewIterator(context.Context, syntax.LogSelectorExpr, Params) (iter.EntryIterator, error) {
+func (m *mockEvaluatorFactory) NewVariantsStepEvaluator(ctx context.Context, expr syntax.VariantsExpr, p Params) (StepEvaluator, error) {
+ if m.variantEvalFunc != nil {
+ return m.variantEvalFunc(ctx, expr, p)
+ }
+ return nil, errors.New("unimplemented mock VariantEvaluatorFactory")
+}
+
+func (m *mockEvaluatorFactory) NewIterator(context.Context, syntax.LogSelectorExpr, Params) (iter.EntryIterator, error) {
return nil, errors.New("unimplemented mock EntryEvaluatorFactory")
}
@@ -2747,13 +3123,54 @@ func newQuerierRecorder(t *testing.T, data interface{}, params interface{}) *que
if seriesIn, ok := data.([][]logproto.Series); ok {
if paramsIn, ok2 := params.([]SelectSampleParams); ok2 {
for i, p := range paramsIn {
- p.Plan = &plan.QueryPlan{
- AST: syntax.MustParseExpr(p.Selector),
+ expr, ok3 := syntax.MustParseExpr(p.Selector).(syntax.VariantsExpr)
+ if ok3 {
+ if p.Plan == nil {
+ p.Plan = &plan.QueryPlan{
+ AST: expr,
+ }
+ }
+
+ curSeries := seriesIn[i]
+ variants := expr.Variants()
+ newSeries := make([]logproto.Series, len(curSeries)*len(variants))
+
+ for vi := range variants {
+ for si, s := range curSeries {
+ lbls, err := promql_parser.ParseMetric(s.Labels)
+ if err != nil {
+ return nil
+ }
+
+ // Add variant label
+ lbls = append(
+ lbls,
+ labels.Label{Name: "__variant__", Value: fmt.Sprintf("%d", vi)},
+ )
+
+ // Copy series with new labels
+ idx := vi*len(curSeries) + si
+ newSeries[idx] = logproto.Series{
+ Labels: lbls.String(),
+ Samples: s.Samples,
+ }
+ }
+ }
+ series[paramsID(p)] = newSeries
+ } else {
+ for i, p := range paramsIn {
+ if p.Plan == nil {
+ p.Plan = &plan.QueryPlan{
+ AST: syntax.MustParseExpr(p.Selector),
+ }
+ }
+ series[paramsID(p)] = seriesIn[i]
+ }
}
- series[paramsID(p)] = seriesIn[i]
}
}
}
+
return &querierRecorder{
streams: streams,
series: series,
diff --git a/pkg/logql/evaluator.go b/pkg/logql/evaluator.go
index 31b0627d41b0d..3cb1c3133bbdf 100644
--- a/pkg/logql/evaluator.go
+++ b/pkg/logql/evaluator.go
@@ -6,6 +6,7 @@ import (
"fmt"
"math"
"sort"
+ "strconv"
"time"
"github.com/pkg/errors"
@@ -13,6 +14,8 @@ import (
"github.com/prometheus/prometheus/promql"
"golang.org/x/sync/errgroup"
+ "github.com/prometheus/prometheus/promql/parser"
+
"github.com/grafana/loki/v3/pkg/iter"
"github.com/grafana/loki/v3/pkg/logproto"
"github.com/grafana/loki/v3/pkg/logql/syntax"
@@ -258,6 +261,7 @@ func Sortable(q Params) (bool, error) {
type EvaluatorFactory interface {
SampleEvaluatorFactory
EntryEvaluatorFactory
+ VariantEvaluatorFactory
}
type SampleEvaluatorFactory interface {
@@ -1338,3 +1342,292 @@ func absentLabels(expr syntax.SampleExpr) (labels.Labels, error) {
}
return m, nil
}
+
+type VariantEvaluatorFactory interface {
+ NewVariantsStepEvaluator(
+ ctx context.Context,
+ expr syntax.VariantsExpr,
+ p Params,
+ ) (StepEvaluator, error)
+}
+
+type VariantsEvaluatorFunc func(ctx context.Context, expr syntax.VariantsExpr, p Params) (StepEvaluator, error)
+
+func (s VariantsEvaluatorFunc) NewVariantsStepEvaluator(
+ ctx context.Context,
+ expr syntax.VariantsExpr,
+ p Params,
+) (StepEvaluator, error) {
+ return s(ctx, expr, p)
+}
+
+func (ev *DefaultEvaluator) NewVariantsStepEvaluator(
+ ctx context.Context,
+ expr syntax.VariantsExpr,
+ q Params,
+) (StepEvaluator, error) {
+ switch e := expr.(type) {
+ case *syntax.MultiVariantExpr:
+ logRange := e.LogRange()
+
+ // We don't have the benefit of sending the vector expression to the source for reducing labels
+ // Since multiple samples are allowed, and they may not share the same labels to reduce by
+ it, err := ev.querier.SelectSamples(ctx, SelectSampleParams{
+ &logproto.SampleQueryRequest{
+ // extend startTs backwards by step
+ Start: q.Start().Add(-logRange.Interval).Add(-logRange.Offset),
+ // add leap nanosecond to endTs to include lines exactly at endTs. range iterators work on start exclusive, end inclusive ranges
+ End: q.End().Add(-logRange.Offset).Add(time.Nanosecond),
+ Selector: expr.String(),
+ Shards: q.Shards(),
+ Plan: &plan.QueryPlan{
+ AST: expr,
+ },
+ StoreChunks: q.GetStoreChunks(),
+ },
+ })
+ if err != nil {
+ return nil, err
+ }
+ return ev.newVariantsEvaluator(ctx, iter.NewPeekingSampleIterator(it), e, q)
+ default:
+ return nil, EvaluatorUnsupportedType(e, ev)
+ }
+}
+
+func (ev *DefaultEvaluator) newVariantsEvaluator(
+ _ context.Context,
+ it iter.PeekingSampleIterator,
+ expr *syntax.MultiVariantExpr,
+ q Params,
+) (StepEvaluator, error) {
+ // an iterator that can buffer samples across all variants for each step
+ bufferedIterator := &bufferedVariantsIterator{
+ iter: it,
+ }
+
+ variantEvaluators := []StepEvaluator{}
+ // TODO(twhitney): using the variant index feels fragile, would prefer if variants had to be named in the query.
+ idx := 0
+ for _, variant := range expr.Variants() {
+ extractors, err := variant.Extractors()
+ if err != nil {
+ return nil, err
+ }
+
+ for range extractors {
+ // wraps the buffered iterator to only return samples for the current variant (determined by the index)
+ variantIterator := &bufferedVariantsIteratorWrapper{
+ bufferedVariantsIterator: bufferedIterator,
+ index: idx,
+ }
+
+ var variantEvaluator StepEvaluator
+ var err error
+ switch e := variant.(type) {
+ case *syntax.VectorAggregationExpr:
+ if rangExpr, ok := e.Left.(*syntax.RangeAggregationExpr); ok {
+ rangeEvaluator, err := newRangeAggEvaluator(iter.NewPeekingSampleIterator(variantIterator), rangExpr, q, rangExpr.Left.Offset)
+ if err != nil {
+ return nil, err
+ }
+
+ e.Grouping.Groups = append(e.Grouping.Groups, "__variant__")
+
+ sort.Strings(e.Grouping.Groups)
+ variantEvaluator = &VectorAggEvaluator{
+ nextEvaluator: rangeEvaluator,
+ expr: e,
+ buf: make([]byte, 0, 1024),
+ lb: labels.NewBuilder(nil),
+ }
+ } else {
+ return nil, fmt.Errorf("expected range aggregation expression but got %T", e.Left)
+ }
+ case *syntax.RangeAggregationExpr:
+ variantEvaluator, err = newRangeAggEvaluator(iter.NewPeekingSampleIterator(variantIterator), e, q, e.Left.Offset)
+ }
+
+ if err != nil {
+ return nil, err
+ }
+
+ variantEvaluators = append(variantEvaluators, variantEvaluator)
+ idx++
+ }
+ }
+
+ return &VariantsEvaluator{
+ current: q.Start().UnixNano() - q.Step().Nanoseconds(),
+ variantEvaluators: variantEvaluators,
+ }, nil
+}
+
+type bufferedVariantsIterator struct {
+ iter iter.PeekingSampleIterator
+ buffer map[int][]sampleWithLabelsAndStreamHash
+ current sampleWithLabelsAndStreamHash
+ currentLabels string
+ err error
+}
+
+type sampleWithLabelsAndStreamHash struct {
+ sample logproto.Sample
+ labels string
+ streamHash uint64
+}
+
+// TODO(twhitney): does this need its own test?
+func (it *bufferedVariantsIterator) Next(index int) bool {
+ // Check if there are samples in the buffer for the requested index
+ if samples, ok := it.buffer[index]; ok && len(samples) > 0 {
+ it.current = samples[0]
+ it.buffer[index] = samples[1:]
+ return true
+ }
+
+ // If not, keep popping samples from the underlying iterator
+ for it.iter.Next() {
+ sample := it.iter.At()
+ labels := it.iter.Labels()
+ variantIndex := it.getVariantIndex(labels)
+ if variantIndex == -1 {
+ it.err = fmt.Errorf("variant label not found in %s", labels)
+ return false
+ }
+
+ currentSample := sampleWithLabelsAndStreamHash{
+ sample: sample,
+ labels: labels,
+ streamHash: it.iter.StreamHash(),
+ }
+
+ if variantIndex == index {
+ it.current = currentSample
+ return true
+ }
+
+ // Store the sample in the buffer for its variant
+ it.storeSample(variantIndex, currentSample)
+ }
+
+ return false
+}
+
+// getVariantIndex determines the variant index for a given sample based on the "__variant__" label
+func (it *bufferedVariantsIterator) getVariantIndex(lbls string) int {
+ metric, err := parser.ParseMetric(lbls)
+ if err != nil {
+ it.err = err
+ return -1
+ }
+
+ for _, lbl := range metric {
+ // TODO: make constant
+ if lbl.Name == "__variant__" {
+ val, err := strconv.Atoi(lbl.Value)
+ if err != nil {
+ it.err = err
+ return -1
+ }
+
+ return val
+ }
+ }
+
+ it.err = fmt.Errorf("variant label not found in %s", lbls)
+ return -1
+}
+
+func (it *bufferedVariantsIterator) storeSample(index int, sample sampleWithLabelsAndStreamHash) {
+ if it.buffer == nil {
+ it.buffer = make(map[int][]sampleWithLabelsAndStreamHash)
+ }
+ it.buffer[index] = append(it.buffer[index], sample)
+}
+
+func (it *bufferedVariantsIterator) At() logproto.Sample {
+ return it.current.sample
+}
+
+func (it *bufferedVariantsIterator) Labels() string {
+ return it.current.labels
+}
+
+func (it *bufferedVariantsIterator) StreamHash() uint64 {
+ return it.current.streamHash
+}
+
+func (it *bufferedVariantsIterator) Err() error {
+ return it.err
+}
+
+func (it *bufferedVariantsIterator) Close() error {
+ return it.iter.Close()
+}
+
+type bufferedVariantsIteratorWrapper struct {
+ *bufferedVariantsIterator
+ index int
+}
+
+// TODO(twhitney): does this need its own test?
+func (it *bufferedVariantsIteratorWrapper) Next() bool {
+ return it.bufferedVariantsIterator.Next(it.index)
+}
+
+// VariantsEvaluator is responsible for making sure the window is loaded from all
+// evaluators for all variants
+// TODO(twhitney): does this need its own test?
+type VariantsEvaluator struct {
+ current int64
+
+ variantEvaluators []StepEvaluator
+ currentSamples SampleVector
+ err error
+}
+
+// Reports any error
+func (it *VariantsEvaluator) Error() error {
+ return it.err
+}
+
+// Explain returns a print of the step evaluation tree
+func (it *VariantsEvaluator) Explain(_ Node) {
+ panic("not implemented") // TODO: Implement
+}
+
+func (it *VariantsEvaluator) Next() (bool, int64, StepResult) {
+ samples := it.currentSamples[:0]
+ hasNext := false
+
+ for _, variantEval := range it.variantEvaluators {
+ if ok, ts, result := variantEval.Next(); ok {
+ hasNext = true
+ samples = append(samples, result.SampleVector()...)
+ if ts > it.current {
+ it.current = ts
+ }
+ }
+ }
+
+ if !hasNext {
+ return false, 0, SampleVector{}
+ }
+
+ it.currentSamples = samples
+ return true, it.current, it.currentSamples
+}
+
+func (it *VariantsEvaluator) Close() error {
+ var errs []error
+ for _, variantIter := range it.variantEvaluators {
+ if err := variantIter.Close(); err != nil {
+ errs = append(errs, err)
+ }
+ }
+ if len(errs) > 0 {
+ return fmt.Errorf("multiple errors on close: %v", errs)
+ }
+ return nil
+}
diff --git a/pkg/logql/log/metrics_extraction.go b/pkg/logql/log/metrics_extraction.go
index 73d8bf3d1e151..34fd0ab58132c 100644
--- a/pkg/logql/log/metrics_extraction.go
+++ b/pkg/logql/log/metrics_extraction.go
@@ -321,6 +321,25 @@ func convertBytes(v string) (float64, error) {
return float64(b), nil
}
+type variantsSampleExtractorWrapper struct {
+ SampleExtractor
+ index int
+}
+
+func NewVariantsSampleExtractorWrapper(
+ index int,
+ extractor SampleExtractor,
+) SampleExtractor {
+ return &variantsSampleExtractorWrapper{
+ SampleExtractor: extractor,
+ index: index,
+ }
+}
+
+func (v *variantsSampleExtractorWrapper) ForStream(labels labels.Labels) StreamSampleExtractor {
+ return NewVariantsStreamSampleExtractorWrapper(v.index, v.SampleExtractor.ForStream(labels))
+}
+
type variantsStreamSampleExtractorWrapper struct {
StreamSampleExtractor
index int
diff --git a/pkg/logql/rangemapper.go b/pkg/logql/rangemapper.go
index f33d50b0c5d92..b64e71264ca6d 100644
--- a/pkg/logql/rangemapper.go
+++ b/pkg/logql/rangemapper.go
@@ -179,6 +179,11 @@ func (m RangeMapper) Map(expr syntax.SampleExpr, vectorAggrPushdown *syntax.Vect
return e, nil
case *syntax.VectorExpr:
return e, nil
+ case *syntax.MultiVariantExpr:
+ // TODO(twhitney): we should be able to handle multi-variant expressions but creating
+ // multiple expression with of() statements that match the sub-range and concatenating
+ // the result
+ return e, nil
default:
// ConcatSampleExpr and DownstreamSampleExpr are not supported input expression types
return nil, errors.Errorf("unexpected expr type (%T) for ASTMapper type (%T) ", expr, m)
diff --git a/pkg/logql/shardmapper.go b/pkg/logql/shardmapper.go
index 2ecf034fb957b..3f36a66374bbc 100644
--- a/pkg/logql/shardmapper.go
+++ b/pkg/logql/shardmapper.go
@@ -90,6 +90,9 @@ func (m ShardMapper) Map(expr syntax.Expr, r *downstreamRecorder, topLevel bool)
return e, 0, nil
case *syntax.VectorExpr:
return e, 0, nil
+ case *syntax.MultiVariantExpr:
+ // TODO(twhitney): this should be possible to support but hasn't been implemented yet
+ return e, 0, nil
case *syntax.MatchersExpr, *syntax.PipelineExpr:
return m.mapLogSelectorExpr(e.(syntax.LogSelectorExpr), r)
case *syntax.VectorAggregationExpr:
diff --git a/pkg/logql/syntax/ast.go b/pkg/logql/syntax/ast.go
index 311ab9443509f..3fc1c39eda25d 100644
--- a/pkg/logql/syntax/ast.go
+++ b/pkg/logql/syntax/ast.go
@@ -2598,13 +2598,19 @@ func (m *MultiVariantExpr) Selector() (LogSelectorExpr, error) {
func (m *MultiVariantExpr) Extractors() ([]log.SampleExtractor, error) {
extractors := make([]log.SampleExtractor, 0, len(m.variants))
+ // TODO(twhitney): using the variant index feels fragile, would prefer if variants had to be named in the query.
+ idx := 0
+
for _, v := range m.variants {
- e, err := v.Extractors()
+ es, err := v.Extractors()
if err != nil {
return nil, err
}
- extractors = append(extractors, e...)
+ for _, e := range es {
+ extractors = append(extractors, log.NewVariantsSampleExtractorWrapper(idx, e))
+ idx++
+ }
}
return extractors, nil
diff --git a/pkg/querier/queryrange/roundtrip.go b/pkg/querier/queryrange/roundtrip.go
index 28dfdb1ea3c4a..233f8d30052b5 100644
--- a/pkg/querier/queryrange/roundtrip.go
+++ b/pkg/querier/queryrange/roundtrip.go
@@ -20,7 +20,6 @@ import (
"github.com/grafana/loki/v3/pkg/logql"
logqllog "github.com/grafana/loki/v3/pkg/logql/log"
"github.com/grafana/loki/v3/pkg/logql/syntax"
- "github.com/grafana/loki/v3/pkg/logqlmodel"
"github.com/grafana/loki/v3/pkg/logqlmodel/stats"
"github.com/grafana/loki/v3/pkg/querier/queryrange/queryrangebase"
base "github.com/grafana/loki/v3/pkg/querier/queryrange/queryrangebase"
@@ -259,26 +258,6 @@ func NewMiddleware(
return nil, nil, err
}
- variantsTripperware, err := NewVariantsTripperware(
- cfg,
- engineOpts,
- log,
- limits,
- schema,
- codec,
- iqo,
- resultsCache,
- cacheGenNumLoader,
- retentionEnabled,
- PrometheusExtractor{},
- metrics,
- indexStatsTripperware,
- metricsNamespace,
- )
- if err != nil {
- return nil, nil, err
- }
-
return base.MiddlewareFunc(func(next base.Handler) base.Handler {
var (
metricRT = metricsTripperware.Wrap(next)
@@ -291,7 +270,6 @@ func NewMiddleware(
seriesVolumeRT = seriesVolumeTripperware.Wrap(next)
detectedFieldsRT = detectedFieldsTripperware.Wrap(next)
detectedLabelsRT = detectedLabelsTripperware.Wrap(next)
- variantsRT = variantsTripperware.Wrap(next)
)
return newRoundTripper(
@@ -307,7 +285,6 @@ func NewMiddleware(
seriesVolumeRT,
detectedFieldsRT,
detectedLabelsRT,
- variantsRT,
limits,
)
}), StopperWrapper{resultsCache, statsCache, volumeCache}, nil
@@ -363,7 +340,7 @@ func NewDetectedLabelsCardinalityFilter(rt queryrangebase.Handler) queryrangebas
type roundTripper struct {
logger log.Logger
- next, limited, log, metric, series, labels, instantMetric, indexStats, seriesVolume, detectedFields, detectedLabels, variants base.Handler
+ next, limited, log, metric, series, labels, instantMetric, indexStats, seriesVolume, detectedFields, detectedLabels base.Handler
limits Limits
}
@@ -371,7 +348,7 @@ type roundTripper struct {
// newRoundTripper creates a new queryrange roundtripper
func newRoundTripper(
logger log.Logger,
- next, limited, log, metric, series, labels, instantMetric, indexStats, seriesVolume, detectedFields, detectedLabels, variants base.Handler,
+ next, limited, log, metric, series, labels, instantMetric, indexStats, seriesVolume, detectedFields, detectedLabels base.Handler,
limits Limits,
) roundTripper {
return roundTripper{
@@ -387,7 +364,6 @@ func newRoundTripper(
seriesVolume: seriesVolume,
detectedFields: detectedFields,
detectedLabels: detectedLabels,
- variants: variants,
next: next,
}
}
@@ -450,7 +426,7 @@ func (r roundTripper) Do(ctx context.Context, req base.Request) (base.Response,
}
}
- return r.variants.Do(ctx, req)
+ return r.metric.Do(ctx, req)
case syntax.SampleExpr:
// The error will be handled later.
groups, err := e.MatcherGroups()
@@ -1316,34 +1292,3 @@ func NewDetectedFieldsTripperware(
return NewDetectedFieldsHandler(limitedHandler, logHandler, limits)
}), nil
}
-
-// NewVariantsTripperware creates a new frontend tripperware responsible for handling queries with multiple variants for a single
-// selector.
-func NewVariantsTripperware(
- _ Config,
- _ logql.EngineOpts,
- _ log.Logger,
- _ Limits,
- _ config.SchemaConfig,
- _ base.Merger,
- _ util.IngesterQueryOptions,
- _ cache.Cache,
- _ base.CacheGenNumberLoader,
- _ bool,
- _ base.Extractor,
- _ *Metrics,
- _ base.Middleware,
- _ string,
-) (base.Middleware, error) {
- return base.MiddlewareFunc(func(next base.Handler) base.Handler {
- return base.HandlerFunc(
- func(ctx context.Context, r base.Request) (base.Response, error) {
- if _, ok := r.(*LokiRequest); !ok {
- return next.Do(ctx, r)
- }
-
- return nil, logqlmodel.ErrVariantsDisabled
- },
- )
- }), nil
-}
diff --git a/pkg/querier/queryrange/roundtrip_test.go b/pkg/querier/queryrange/roundtrip_test.go
index 2604dd72e8484..bafe57096de98 100644
--- a/pkg/querier/queryrange/roundtrip_test.go
+++ b/pkg/querier/queryrange/roundtrip_test.go
@@ -1005,7 +1005,6 @@ func TestPostQueries(t *testing.T) {
handler,
handler,
handler,
- handler,
fakeLimits{},
).Do(ctx, lreq)
require.NoError(t, err)
| chore | implement multi-variant queries (#16426) |
e9419747c93d9dd79848514c7329fa99116f92e3 | 2022-10-05 12:14:44 | Periklis Tsirakidis | Fix internal server bootstrap for query frontend (#7328)
Adds a fix for setting up the internal server module | false |
diff --git a/CHANGELOG.md b/CHANGELOG.md
index cf8f583563e3d..81cf11663d9bb 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -25,6 +25,7 @@
* [7270](https://github.com/grafana/loki/pull/7270) **wilfriedroset**: Add support for `username` to redis cache configuration.
##### Fixes
+* [7238](https://github.com/grafana/loki/pull/7328) **periklis**: Fix internal server bootstrap for query frontend
* [7288](https://github.com/grafana/loki/pull/7288) **ssncferreira**: Fix query mapping in AST mapper `rangemapper` to support the new `VectorExpr` expression.
* [7040](https://github.com/grafana/loki/pull/7040) **bakunowski**: Remove duplicated `loki_boltdb_shipper` prefix from `tables_upload_operation_total` metric.
* [6937](https://github.com/grafana/loki/pull/6937) **ssncferreira**: Fix topk and bottomk expressions with parameter <= 0.
diff --git a/pkg/loki/loki.go b/pkg/loki/loki.go
index dc3173e0dc75c..f3917037d52bd 100644
--- a/pkg/loki/loki.go
+++ b/pkg/loki/loki.go
@@ -66,7 +66,7 @@ type Config struct {
BallastBytes int `yaml:"ballast_bytes"`
// TODO(dannyk): Remove these config options before next release; they don't need to be configurable.
- // These are only here to allow us to test the new functionality.
+ // These are only here to allow us to test the new functionality.
UseBufferedLogger bool `yaml:"use_buffered_logger"`
UseSyncLogger bool `yaml:"use_sync_logger"`
@@ -570,7 +570,7 @@ func (t *Loki) setupModuleManager() error {
}
if t.Cfg.InternalServer.Enable {
- depsToUpdate := []string{Distributor, Ingester, Querier, QueryFrontend, QueryScheduler, Ruler, TableManager, Compactor, IndexGateway}
+ depsToUpdate := []string{Distributor, Ingester, Querier, QueryFrontendTripperware, QueryScheduler, Ruler, TableManager, Compactor, IndexGateway}
for _, dep := range depsToUpdate {
var idx int
@@ -581,8 +581,8 @@ func (t *Loki) setupModuleManager() error {
}
}
- lhs := deps[dep][0:idx]
- rhs := deps[dep][idx+1:]
+ lhs := deps[dep][0 : idx+1]
+ rhs := deps[dep][idx+2:]
deps[dep] = append(lhs, InternalServer)
deps[dep] = append(deps[dep], rhs...)
diff --git a/pkg/loki/loki_test.go b/pkg/loki/loki_test.go
index bfec1595a7b15..b85abf84b722a 100644
--- a/pkg/loki/loki_test.go
+++ b/pkg/loki/loki_test.go
@@ -99,24 +99,55 @@ func TestLoki_isModuleEnabled(t1 *testing.T) {
}
func TestLoki_AppendOptionalInternalServer(t *testing.T) {
- tests := []string{Distributor, Ingester, Querier, QueryFrontend, QueryScheduler, Ruler, TableManager, Compactor, IndexGateway}
+ fake := &Loki{
+ Cfg: Config{
+ Target: flagext.StringSliceCSV{All},
+ Server: server.Config{
+ HTTPListenAddress: "3100",
+ },
+ },
+ }
+ err := fake.setupModuleManager()
+ assert.NoError(t, err)
+
+ var tests []string
+ for target, deps := range fake.deps {
+ switch target {
+ // Blacklist these targets for using the internal server
+ case IndexGatewayRing, MemberlistKV, OverridesExporter, Ring, Embededcache, Server:
+ continue
+ }
+
+ for _, dep := range deps {
+ if dep == Server {
+ tests = append(tests, target)
+ break
+ }
+ }
+ }
+
+ assert.NotEmpty(t, tests, tests)
+
for _, tt := range tests {
t.Run(tt, func(t *testing.T) {
l := &Loki{
Cfg: Config{
Target: flagext.StringSliceCSV{tt},
+ Server: server.Config{
+ HTTPListenAddress: "3100",
+ },
InternalServer: internalserver.Config{
Config: server.Config{
- HTTPListenAddress: "3002",
+ HTTPListenAddress: "3101",
},
Enable: true,
},
},
}
-
err := l.setupModuleManager()
assert.NoError(t, err)
assert.Contains(t, l.deps[tt], InternalServer)
+ assert.Contains(t, l.deps[tt], Server)
})
}
}
| unknown | Fix internal server bootstrap for query frontend (#7328)
Adds a fix for setting up the internal server module |
ae955ed30d841675dbb9e30327b84728050e724a | 2024-09-24 17:37:14 | Cyril Tovena | fix(pattern): Fixes latency metric namespace for tee to pattern (#14241) | false |
diff --git a/pkg/pattern/tee_service.go b/pkg/pattern/tee_service.go
index c38ef95dd90ff..c279474cce42e 100644
--- a/pkg/pattern/tee_service.go
+++ b/pkg/pattern/tee_service.go
@@ -21,7 +21,6 @@ import (
"github.com/grafana/loki/v3/pkg/loghttp/push"
"github.com/grafana/loki/v3/pkg/logproto"
"github.com/grafana/loki/v3/pkg/logql/syntax"
- "github.com/grafana/loki/v3/pkg/util/constants"
ring_client "github.com/grafana/dskit/ring/client"
)
@@ -77,10 +76,9 @@ func NewTeeService(
sendDuration: instrument.NewHistogramCollector(
promauto.With(registerer).NewHistogramVec(
prometheus.HistogramOpts{
- Namespace: constants.Loki,
- Name: "pattern_ingester_tee_send_duration_seconds",
- Help: "Time spent sending batches from the tee to the pattern ingester",
- Buckets: prometheus.DefBuckets,
+ Name: "pattern_ingester_tee_send_duration_seconds",
+ Help: "Time spent sending batches from the tee to the pattern ingester",
+ Buckets: prometheus.DefBuckets,
}, instrument.HistogramCollectorBuckets,
),
),
| fix | Fixes latency metric namespace for tee to pattern (#14241) |
3b0fa184c542969c6c355fd65e33341f62172de3 | 2024-04-16 01:45:22 | J Stickler | docs: hide the sizing calculator until updated (#12598) | false |
diff --git a/docs/sources/setup/_index.md b/docs/sources/setup/_index.md
index e1e1caef768ff..2464feb75f350 100644
--- a/docs/sources/setup/_index.md
+++ b/docs/sources/setup/_index.md
@@ -7,7 +7,6 @@ weight: 300
# Setup Loki
-- Estimate the initial [size]({{< relref "./size" >}}) for your Loki cluster.
- [Install]({{< relref "./install" >}}) Loki.
- [Migrate]({{< relref "./migrate" >}}) from one Loki implementation to another.
- [Upgrade]({{< relref "./upgrade" >}}) from one Loki version to a newer version.
diff --git a/docs/sources/setup/install/_index.md b/docs/sources/setup/install/_index.md
index 11521f1158ed5..2b56cba78cb69 100644
--- a/docs/sources/setup/install/_index.md
+++ b/docs/sources/setup/install/_index.md
@@ -17,10 +17,6 @@ There are several methods of installing Loki and Promtail:
- [Install and run locally]({{< relref "./local" >}})
- [Install from source]({{< relref "./install-from-source" >}})
-The [Sizing Tool]({{< relref "../size" >}}) can be used to determine the proper cluster sizing
-given an expected ingestion rate and query performance. It targets the Helm
-installation on Kubernetes.
-
## General process
In order to run Loki, you must:
diff --git a/docs/sources/setup/size/_index.md b/docs/sources/setup/size/_index.md
index e2215c7e80f72..74dcb8e504964 100644
--- a/docs/sources/setup/size/_index.md
+++ b/docs/sources/setup/size/_index.md
@@ -6,7 +6,7 @@ aliases:
- ../installation/sizing/
- ../installation/helm/generate
weight: 100
-keywords: []
+draft: true
---
<link rel="stylesheet" href="../../query/analyzer/style.css"> | docs | hide the sizing calculator until updated (#12598) |
8cd4694adbc7b3236e8908e14f51283e027a078d | 2022-05-02 17:31:57 | Sandeep Sukhani | allow more time for boltdb-shipper index syncs to finish (#6071)
* retains index files with ingester for 20 mins instead of 12 mins after uploading
* sets GetChunkIDs lookback to 2h41m instead of 2h28m | false |
diff --git a/pkg/loki/modules.go b/pkg/loki/modules.go
index 6e648da86b4a0..d3660d1631b4c 100644
--- a/pkg/loki/modules.go
+++ b/pkg/loki/modules.go
@@ -412,13 +412,13 @@ func (t *Loki) initStore() (_ services.Service, err error) {
// have query gaps on chunks flushed after an index entry is cached by keeping them retained in the ingester
// and queried as part of live data until the cache TTL expires on the index entry.
t.Cfg.Ingester.RetainPeriod = t.Cfg.StorageConfig.IndexCacheValidity + 1*time.Minute
- t.Cfg.StorageConfig.BoltDBShipperConfig.IngesterDBRetainPeriod = boltdbShipperQuerierIndexUpdateDelay(t.Cfg) + 2*time.Minute
+ t.Cfg.StorageConfig.BoltDBShipperConfig.IngesterDBRetainPeriod = boltdbShipperQuerierIndexUpdateDelay(t.Cfg)
case t.Cfg.isModuleEnabled(Querier), t.Cfg.isModuleEnabled(Ruler), t.Cfg.isModuleEnabled(Read), t.isModuleActive(IndexGateway):
// We do not want query to do any updates to index
t.Cfg.StorageConfig.BoltDBShipperConfig.Mode = shipper.ModeReadOnly
default:
t.Cfg.StorageConfig.BoltDBShipperConfig.Mode = shipper.ModeReadWrite
- t.Cfg.StorageConfig.BoltDBShipperConfig.IngesterDBRetainPeriod = boltdbShipperQuerierIndexUpdateDelay(t.Cfg) + 2*time.Minute
+ t.Cfg.StorageConfig.BoltDBShipperConfig.IngesterDBRetainPeriod = boltdbShipperQuerierIndexUpdateDelay(t.Cfg)
}
}
@@ -955,10 +955,12 @@ func calculateAsyncStoreQueryIngestersWithin(queryIngestersWithinConfig, minDura
}
// boltdbShipperQuerierIndexUpdateDelay returns duration it could take for queriers to serve the index since it was uploaded.
+// It considers upto 3 sync attempts for the indexgateway/queries to be successful in syncing the files to factor in worst case scenarios like
+// failures in sync, low download throughput, various kinds of caches in between etc. which can delay the sync operation from getting all the updates from the storage.
// It also considers index cache validity because a querier could have cached index just before it was going to resync which means
// it would keep serving index until the cache entries expire.
func boltdbShipperQuerierIndexUpdateDelay(cfg Config) time.Duration {
- return cfg.StorageConfig.IndexCacheValidity + cfg.StorageConfig.BoltDBShipperConfig.ResyncInterval
+ return cfg.StorageConfig.IndexCacheValidity + cfg.StorageConfig.BoltDBShipperConfig.ResyncInterval*3
}
// boltdbShipperIngesterIndexUploadDelay returns duration it could take for an index file containing id of a chunk to be uploaded to the shared store since it got flushed.
@@ -967,9 +969,9 @@ func boltdbShipperIngesterIndexUploadDelay() time.Duration {
}
// boltdbShipperMinIngesterQueryStoreDuration returns minimum duration(with some buffer) ingesters should query their stores to
-// avoid missing any logs or chunk ids due to async nature of BoltDB Shipper.
+// avoid queriers from missing any logs or chunk ids due to async nature of BoltDB Shipper.
func boltdbShipperMinIngesterQueryStoreDuration(cfg Config) time.Duration {
- return cfg.Ingester.MaxChunkAge + boltdbShipperIngesterIndexUploadDelay() + boltdbShipperQuerierIndexUpdateDelay(cfg) + 2*time.Minute
+ return cfg.Ingester.MaxChunkAge + boltdbShipperIngesterIndexUploadDelay() + boltdbShipperQuerierIndexUpdateDelay(cfg) + 5*time.Minute
}
// NewServerService constructs service from Server component.
| unknown | allow more time for boltdb-shipper index syncs to finish (#6071)
* retains index files with ingester for 20 mins instead of 12 mins after uploading
* sets GetChunkIDs lookback to 2h41m instead of 2h28m |
641c9ee48a5cac7c57f081a2b73c16d5a8c1953b | 2023-09-18 23:51:48 | Travis Patterson | Don't allow unbounded parallelism when downloading indices (#10634)
Index Gateways can stall when certain queries try to download tons of
indices because. `ErrGroup` needs an explicit limit set on it. This PR
sets the limit provisionally until a more configurable solution can be
introduced.
---------
Co-authored-by: Dylan Guedes <dylan.guedes@grafana.com> | false |
diff --git a/pkg/storage/stores/indexshipper/downloads/index_set.go b/pkg/storage/stores/indexshipper/downloads/index_set.go
index 9f1d6e9efdb77..6da3774c1c710 100644
--- a/pkg/storage/stores/indexshipper/downloads/index_set.go
+++ b/pkg/storage/stores/indexshipper/downloads/index_set.go
@@ -202,6 +202,7 @@ func (t *indexSet) ForEachConcurrent(ctx context.Context, callback index.ForEach
defer t.indexMtx.rUnlock()
g, ctx := errgroup.WithContext(ctx)
+ g.SetLimit(200)
logger := util_log.WithContext(ctx, t.logger)
level.Debug(logger).Log("index-files-count", len(t.index))
| unknown | Don't allow unbounded parallelism when downloading indices (#10634)
Index Gateways can stall when certain queries try to download tons of
indices because. `ErrGroup` needs an explicit limit set on it. This PR
sets the limit provisionally until a more configurable solution can be
introduced.
---------
Co-authored-by: Dylan Guedes <dylan.guedes@grafana.com> |
f6a3300f872f8dfdebc131c73b43bbd8d49d0f03 | 2023-03-29 13:35:13 | Grot (@grafanabot) | [CI/CD] Update yaml file `./production/helm/loki/Chart.yaml` (+1 other) (#8923)
**Here is a summary of the updates contained in this PR:**
***
Update attribute `$.appVersion` in yaml file
`./production/helm/loki/Chart.yaml` to the following value: `2.7.5`
***
Bump version of Helm Chart
Add changelog entry to `./production/helm/loki/CHANGELOG.md`
Re-generate docs
Co-authored-by: Michel Hollands <42814411+MichelHollands@users.noreply.github.com> | false |
diff --git a/production/helm/loki/CHANGELOG.md b/production/helm/loki/CHANGELOG.md
index 718a8891eb597..101407ade4970 100644
--- a/production/helm/loki/CHANGELOG.md
+++ b/production/helm/loki/CHANGELOG.md
@@ -13,6 +13,11 @@ Entries should include a reference to the pull request that introduced the chang
[//]: # (<AUTOMATED_UPDATES_LOCATOR> : do not remove this line. This locator is used by the CI pipeline to automatically create a changelog entry for each new Loki release. Add other chart versions and respective changelog entries bellow this line.)
+## 4.9.0
+
+- [CHANGE] Changed version of Loki to 2.7.5
+
+
- [BUGFIX] Fix role/PSP mapping
## 4.8.0
diff --git a/production/helm/loki/Chart.yaml b/production/helm/loki/Chart.yaml
index 2a3e6227f108a..3c1f036378af9 100644
--- a/production/helm/loki/Chart.yaml
+++ b/production/helm/loki/Chart.yaml
@@ -2,8 +2,8 @@ apiVersion: v2
name: loki
description: Helm chart for Grafana Loki in simple, scalable mode
type: application
-appVersion: 2.7.3
-version: 4.8.0
+appVersion: 2.7.5
+version: 4.9.0
home: https://grafana.github.io/helm-charts
sources:
- https://github.com/grafana/loki
diff --git a/production/helm/loki/README.md b/production/helm/loki/README.md
index 335a0f3ba5c10..f0be681aa56dc 100644
--- a/production/helm/loki/README.md
+++ b/production/helm/loki/README.md
@@ -1,6 +1,6 @@
# loki
-  
+  
Helm chart for Grafana Loki in simple, scalable mode
| unknown | [CI/CD] Update yaml file `./production/helm/loki/Chart.yaml` (+1 other) (#8923)
**Here is a summary of the updates contained in this PR:**
***
Update attribute `$.appVersion` in yaml file
`./production/helm/loki/Chart.yaml` to the following value: `2.7.5`
***
Bump version of Helm Chart
Add changelog entry to `./production/helm/loki/CHANGELOG.md`
Re-generate docs
Co-authored-by: Michel Hollands <42814411+MichelHollands@users.noreply.github.com> |
564f833a307247be0373609afa2610bdffe09485 | 2022-01-05 19:02:07 | Kaviraj | Improve error message if incoming logs timestamp is far too behind. (#5040)
* Improve error message if incoming logs timestamp is far too behind.
This is part of JSON response giving out as the response to HTTP /push endpoint.
Old message
```
{"code":400,"status":"error","message":"entry for stream '{foo=\"bar\"}' has timestamp too old: 1970-01-01 01:00:00.5 +0100 CET"}
```
New message
```
{"code":400,"status":"error","message":"entry for stream '{foo=\"bar\"}' has timestamp too old: 2021-12-28 01:48:45.5 +0100 CET, accepts timestamp from: 2021-12-29 09:48:45.737756651 +0100 CET"}
```
main rationale being, hard to know what is the closest timestamp that Loki is expecting without going through the config.
Also config is not straight forward (its `latest stream's timestamp - ingester.max-chunk-age/2`)
Signed-off-by: Kaviraj <kavirajkanagaraj@gmail.com>
* Tweaks the error message
Signed-off-by: Kaviraj <kavirajkanagaraj@gmail.com>
* Fix `distributor/validator_test`.
1. Make `time.Now()` mockable.
2. Make time format consistent across error messages.
Signed-off-by: Kaviraj <kavirajkanagaraj@gmail.com>
* Rename `getValidationContextFor` to `getValidationContextForTime`
Signed-off-by: Kaviraj <kavirajkanagaraj@gmail.com> | false |
diff --git a/pkg/distributor/distributor.go b/pkg/distributor/distributor.go
index 9997b637f14fc..d57d813f1f8ab 100644
--- a/pkg/distributor/distributor.go
+++ b/pkg/distributor/distributor.go
@@ -227,7 +227,7 @@ func (d *Distributor) Push(ctx context.Context, req *logproto.PushRequest) (*log
validatedSamplesSize := 0
validatedSamplesCount := 0
- validationContext := d.validator.getValidationContextFor(userID)
+ validationContext := d.validator.getValidationContextForTime(time.Now(), userID)
for _, stream := range req.Streams {
// Truncate first so subsequent steps have consistent line lengths
diff --git a/pkg/distributor/distributor_test.go b/pkg/distributor/distributor_test.go
index 7eeb8d538919f..77b7a93975664 100644
--- a/pkg/distributor/distributor_test.go
+++ b/pkg/distributor/distributor_test.go
@@ -146,7 +146,7 @@ func Benchmark_SortLabelsOnPush(b *testing.B) {
d := prepare(&testing.T{}, limits, nil, func(addr string) (ring_client.PoolClient, error) { return ingester, nil })
defer services.StopAndAwaitTerminated(context.Background(), d) //nolint:errcheck
request := makeWriteRequest(10, 10)
- vCtx := d.validator.getValidationContextFor("123")
+ vCtx := d.validator.getValidationContextForTime(testTime, "123")
for n := 0; n < b.N; n++ {
stream := request.Streams[0]
stream.Labels = `{buzz="f", a="b"}`
diff --git a/pkg/distributor/validator.go b/pkg/distributor/validator.go
index c279c74625733..f69dc5c18e132 100644
--- a/pkg/distributor/validator.go
+++ b/pkg/distributor/validator.go
@@ -13,6 +13,10 @@ import (
"github.com/grafana/loki/pkg/validation"
)
+const (
+ timeFormat = time.RFC3339
+)
+
type Validator struct {
Limits
}
@@ -39,8 +43,7 @@ type validationContext struct {
userID string
}
-func (v Validator) getValidationContextFor(userID string) validationContext {
- now := time.Now()
+func (v Validator) getValidationContextForTime(now time.Time, userID string) validationContext {
return validationContext{
userID: userID,
rejectOldSample: v.RejectOldSamples(userID),
@@ -57,16 +60,21 @@ func (v Validator) getValidationContextFor(userID string) validationContext {
// ValidateEntry returns an error if the entry is invalid
func (v Validator) ValidateEntry(ctx validationContext, labels string, entry logproto.Entry) error {
ts := entry.Timestamp.UnixNano()
+
+ // Makes time string on the error message formatted consistently.
+ formatedEntryTime := entry.Timestamp.Format(timeFormat)
+ formatedRejectMaxAgeTime := time.Unix(0, ctx.rejectOldSampleMaxAge).Format(timeFormat)
+
if ctx.rejectOldSample && ts < ctx.rejectOldSampleMaxAge {
validation.DiscardedSamples.WithLabelValues(validation.GreaterThanMaxSampleAge, ctx.userID).Inc()
validation.DiscardedBytes.WithLabelValues(validation.GreaterThanMaxSampleAge, ctx.userID).Add(float64(len(entry.Line)))
- return httpgrpc.Errorf(http.StatusBadRequest, validation.GreaterThanMaxSampleAgeErrorMsg, labels, entry.Timestamp)
+ return httpgrpc.Errorf(http.StatusBadRequest, validation.GreaterThanMaxSampleAgeErrorMsg, labels, formatedEntryTime, formatedRejectMaxAgeTime)
}
if ts > ctx.creationGracePeriod {
validation.DiscardedSamples.WithLabelValues(validation.TooFarInFuture, ctx.userID).Inc()
validation.DiscardedBytes.WithLabelValues(validation.TooFarInFuture, ctx.userID).Add(float64(len(entry.Line)))
- return httpgrpc.Errorf(http.StatusBadRequest, validation.TooFarInFutureErrorMsg, labels, entry.Timestamp)
+ return httpgrpc.Errorf(http.StatusBadRequest, validation.TooFarInFutureErrorMsg, labels, formatedEntryTime)
}
if maxSize := ctx.maxLineSize; maxSize != 0 && len(entry.Line) > maxSize {
diff --git a/pkg/distributor/validator_test.go b/pkg/distributor/validator_test.go
index af3625e347917..1426cfc6816a9 100644
--- a/pkg/distributor/validator_test.go
+++ b/pkg/distributor/validator_test.go
@@ -59,14 +59,20 @@ func TestValidator_ValidateEntry(t *testing.T) {
},
},
logproto.Entry{Timestamp: testTime.Add(-time.Hour * 5), Line: "test"},
- httpgrpc.Errorf(http.StatusBadRequest, validation.GreaterThanMaxSampleAgeErrorMsg, testStreamLabels, testTime.Add(-time.Hour*5)),
+ httpgrpc.Errorf(
+ http.StatusBadRequest,
+ validation.GreaterThanMaxSampleAgeErrorMsg,
+ testStreamLabels,
+ testTime.Add(-time.Hour*5).Format(timeFormat),
+ testTime.Add(-1*time.Hour).Format(timeFormat), // same as RejectOldSamplesMaxAge
+ ),
},
{
"test too new",
"test",
nil,
logproto.Entry{Timestamp: testTime.Add(time.Hour * 5), Line: "test"},
- httpgrpc.Errorf(http.StatusBadRequest, validation.TooFarInFutureErrorMsg, testStreamLabels, testTime.Add(time.Hour*5)),
+ httpgrpc.Errorf(http.StatusBadRequest, validation.TooFarInFutureErrorMsg, testStreamLabels, testTime.Add(time.Hour*5).Format(timeFormat)),
},
{
"line too long",
@@ -89,7 +95,7 @@ func TestValidator_ValidateEntry(t *testing.T) {
v, err := NewValidator(o)
assert.NoError(t, err)
- err = v.ValidateEntry(v.getValidationContextFor(tt.userID), testStreamLabels, tt.entry)
+ err = v.ValidateEntry(v.getValidationContextForTime(testTime, tt.userID), testStreamLabels, tt.entry)
assert.Equal(t, tt.expected, err)
})
}
@@ -190,7 +196,7 @@ func TestValidator_ValidateLabels(t *testing.T) {
v, err := NewValidator(o)
assert.NoError(t, err)
- err = v.ValidateLabels(v.getValidationContextFor(tt.userID), mustParseLabels(tt.labels), logproto.Stream{Labels: tt.labels})
+ err = v.ValidateLabels(v.getValidationContextForTime(testTime, tt.userID), mustParseLabels(tt.labels), logproto.Stream{Labels: tt.labels})
assert.Equal(t, tt.expected, err)
})
}
diff --git a/pkg/validation/validate.go b/pkg/validation/validate.go
index d8fb7903afee7..e38f1bd3376e7 100644
--- a/pkg/validation/validate.go
+++ b/pkg/validation/validate.go
@@ -34,7 +34,7 @@ const (
TooFarBehind = "too_far_behind"
// GreaterThanMaxSampleAge is a reason for discarding log lines which are older than the current time - `reject_old_samples_max_age`
GreaterThanMaxSampleAge = "greater_than_max_sample_age"
- GreaterThanMaxSampleAgeErrorMsg = "entry for stream '%s' has timestamp too old: %v"
+ GreaterThanMaxSampleAgeErrorMsg = "entry for stream '%s' has timestamp too old: %v, oldest acceptable timestamp is: %v"
// TooFarInFuture is a reason for discarding log lines which are newer than the current time + `creation_grace_period`
TooFarInFuture = "too_far_in_future"
TooFarInFutureErrorMsg = "entry for stream '%s' has timestamp too new: %v"
| unknown | Improve error message if incoming logs timestamp is far too behind. (#5040)
* Improve error message if incoming logs timestamp is far too behind.
This is part of JSON response giving out as the response to HTTP /push endpoint.
Old message
```
{"code":400,"status":"error","message":"entry for stream '{foo=\"bar\"}' has timestamp too old: 1970-01-01 01:00:00.5 +0100 CET"}
```
New message
```
{"code":400,"status":"error","message":"entry for stream '{foo=\"bar\"}' has timestamp too old: 2021-12-28 01:48:45.5 +0100 CET, accepts timestamp from: 2021-12-29 09:48:45.737756651 +0100 CET"}
```
main rationale being, hard to know what is the closest timestamp that Loki is expecting without going through the config.
Also config is not straight forward (its `latest stream's timestamp - ingester.max-chunk-age/2`)
Signed-off-by: Kaviraj <kavirajkanagaraj@gmail.com>
* Tweaks the error message
Signed-off-by: Kaviraj <kavirajkanagaraj@gmail.com>
* Fix `distributor/validator_test`.
1. Make `time.Now()` mockable.
2. Make time format consistent across error messages.
Signed-off-by: Kaviraj <kavirajkanagaraj@gmail.com>
* Rename `getValidationContextFor` to `getValidationContextForTime`
Signed-off-by: Kaviraj <kavirajkanagaraj@gmail.com> |
00da1ca140d74a1b54104fda74042f5fa77135d8 | 2025-01-23 04:09:01 | renovate[bot] | fix(deps): update module github.com/minio/minio-go/v7 to v7.0.84 (main) (#15890)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Paul Rogers <129207811+paul1r@users.noreply.github.com> | false |
diff --git a/go.mod b/go.mod
index f2788fd428fa5..bff55553c5ef8 100644
--- a/go.mod
+++ b/go.mod
@@ -68,7 +68,7 @@ require (
github.com/klauspost/pgzip v1.2.6
github.com/leodido/go-syslog/v4 v4.2.0
github.com/mattn/go-ieproxy v0.0.12
- github.com/minio/minio-go/v7 v7.0.83
+ github.com/minio/minio-go/v7 v7.0.84
github.com/mitchellh/go-wordwrap v1.0.1
github.com/mitchellh/mapstructure v1.5.1-0.20220423185008-bf980b35cac4
github.com/modern-go/reflect2 v1.0.2
diff --git a/go.sum b/go.sum
index 01c676221d6b8..ddb151204c531 100644
--- a/go.sum
+++ b/go.sum
@@ -848,8 +848,8 @@ github.com/miekg/dns v1.1.62 h1:cN8OuEF1/x5Rq6Np+h1epln8OiyPWV+lROx9LxcGgIQ=
github.com/miekg/dns v1.1.62/go.mod h1:mvDlcItzm+br7MToIKqkglaGhlFMHJ9DTNNWONWXbNQ=
github.com/minio/md5-simd v1.1.2 h1:Gdi1DZK69+ZVMoNHRXJyNcxrMA4dSxoYHZSQbirFg34=
github.com/minio/md5-simd v1.1.2/go.mod h1:MzdKDxYpY2BT9XQFocsiZf/NKVtR7nkE4RoEpN+20RM=
-github.com/minio/minio-go/v7 v7.0.83 h1:W4Kokksvlz3OKf3OqIlzDNKd4MERlC2oN8YptwJ0+GA=
-github.com/minio/minio-go/v7 v7.0.83/go.mod h1:57YXpvc5l3rjPdhqNrDsvVlY0qPI6UTk1bflAe+9doY=
+github.com/minio/minio-go/v7 v7.0.84 h1:D1HVmAF8JF8Bpi6IU4V9vIEj+8pc+xU88EWMs2yed0E=
+github.com/minio/minio-go/v7 v7.0.84/go.mod h1:57YXpvc5l3rjPdhqNrDsvVlY0qPI6UTk1bflAe+9doY=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/cli v1.1.0/go.mod h1:xcISNoH86gajksDmfB23e/pu+B+GeFRMYmoHXxx3xhI=
github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db h1:62I3jR2EmQ4l5rM/4FEfDWcRD+abF5XlKShorW5LRoQ=
diff --git a/vendor/github.com/minio/minio-go/v7/api-copy-object.go b/vendor/github.com/minio/minio-go/v7/api-copy-object.go
index 0c95d91ec7619..b6cadc86a929a 100644
--- a/vendor/github.com/minio/minio-go/v7/api-copy-object.go
+++ b/vendor/github.com/minio/minio-go/v7/api-copy-object.go
@@ -68,7 +68,7 @@ func (c *Client) CopyObject(ctx context.Context, dst CopyDestOptions, src CopySr
Bucket: dst.Bucket,
Key: dst.Object,
LastModified: cpObjRes.LastModified,
- ETag: trimEtag(resp.Header.Get("ETag")),
+ ETag: trimEtag(cpObjRes.ETag),
VersionID: resp.Header.Get(amzVersionID),
Expiration: expTime,
ExpirationRuleID: ruleID,
diff --git a/vendor/github.com/minio/minio-go/v7/api.go b/vendor/github.com/minio/minio-go/v7/api.go
index cb46816d0db32..cc0ded2c7f2d5 100644
--- a/vendor/github.com/minio/minio-go/v7/api.go
+++ b/vendor/github.com/minio/minio-go/v7/api.go
@@ -133,7 +133,7 @@ type Options struct {
// Global constants.
const (
libraryName = "minio-go"
- libraryVersion = "v7.0.83"
+ libraryVersion = "v7.0.84"
)
// User Agent should always following the below style.
diff --git a/vendor/github.com/minio/minio-go/v7/pkg/credentials/iam_aws.go b/vendor/github.com/minio/minio-go/v7/pkg/credentials/iam_aws.go
index 0ba06e710662c..e3230bb186dab 100644
--- a/vendor/github.com/minio/minio-go/v7/pkg/credentials/iam_aws.go
+++ b/vendor/github.com/minio/minio-go/v7/pkg/credentials/iam_aws.go
@@ -153,9 +153,6 @@ func (m *IAM) RetrieveWithCredContext(cc *CredContext) (Value, error) {
}
endpoint := m.Endpoint
- if endpoint == "" {
- endpoint = cc.Endpoint
- }
switch {
case identityFile != "":
diff --git a/vendor/github.com/minio/minio-go/v7/pkg/s3utils/utils.go b/vendor/github.com/minio/minio-go/v7/pkg/s3utils/utils.go
index 0e63ce2f7dc57..80fd029d83434 100644
--- a/vendor/github.com/minio/minio-go/v7/pkg/s3utils/utils.go
+++ b/vendor/github.com/minio/minio-go/v7/pkg/s3utils/utils.go
@@ -118,53 +118,53 @@ func GetRegionFromURL(endpointURL url.URL) string {
if endpointURL == sentinelURL {
return ""
}
- if endpointURL.Host == "s3-external-1.amazonaws.com" {
+ if endpointURL.Hostname() == "s3-external-1.amazonaws.com" {
return ""
}
// if elb's are used we cannot calculate which region it may be, just return empty.
- if elbAmazonRegex.MatchString(endpointURL.Host) || elbAmazonCnRegex.MatchString(endpointURL.Host) {
+ if elbAmazonRegex.MatchString(endpointURL.Hostname()) || elbAmazonCnRegex.MatchString(endpointURL.Hostname()) {
return ""
}
// We check for FIPS dualstack matching first to avoid the non-greedy
// regex for FIPS non-dualstack matching a dualstack URL
- parts := amazonS3HostFIPSDualStack.FindStringSubmatch(endpointURL.Host)
+ parts := amazonS3HostFIPSDualStack.FindStringSubmatch(endpointURL.Hostname())
if len(parts) > 1 {
return parts[1]
}
- parts = amazonS3HostFIPS.FindStringSubmatch(endpointURL.Host)
+ parts = amazonS3HostFIPS.FindStringSubmatch(endpointURL.Hostname())
if len(parts) > 1 {
return parts[1]
}
- parts = amazonS3HostDualStack.FindStringSubmatch(endpointURL.Host)
+ parts = amazonS3HostDualStack.FindStringSubmatch(endpointURL.Hostname())
if len(parts) > 1 {
return parts[1]
}
- parts = amazonS3HostHyphen.FindStringSubmatch(endpointURL.Host)
+ parts = amazonS3HostHyphen.FindStringSubmatch(endpointURL.Hostname())
if len(parts) > 1 {
return parts[1]
}
- parts = amazonS3ChinaHost.FindStringSubmatch(endpointURL.Host)
+ parts = amazonS3ChinaHost.FindStringSubmatch(endpointURL.Hostname())
if len(parts) > 1 {
return parts[1]
}
- parts = amazonS3ChinaHostDualStack.FindStringSubmatch(endpointURL.Host)
+ parts = amazonS3ChinaHostDualStack.FindStringSubmatch(endpointURL.Hostname())
if len(parts) > 1 {
return parts[1]
}
- parts = amazonS3HostDot.FindStringSubmatch(endpointURL.Host)
+ parts = amazonS3HostDot.FindStringSubmatch(endpointURL.Hostname())
if len(parts) > 1 {
return parts[1]
}
- parts = amazonS3HostPrivateLink.FindStringSubmatch(endpointURL.Host)
+ parts = amazonS3HostPrivateLink.FindStringSubmatch(endpointURL.Hostname())
if len(parts) > 1 {
return parts[1]
}
diff --git a/vendor/modules.txt b/vendor/modules.txt
index af0135057ee65..c1ca114153258 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -1276,7 +1276,7 @@ github.com/miekg/dns
# github.com/minio/md5-simd v1.1.2
## explicit; go 1.14
github.com/minio/md5-simd
-# github.com/minio/minio-go/v7 v7.0.83
+# github.com/minio/minio-go/v7 v7.0.84
## explicit; go 1.22
github.com/minio/minio-go/v7
github.com/minio/minio-go/v7/pkg/cors
| fix | update module github.com/minio/minio-go/v7 to v7.0.84 (main) (#15890)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Paul Rogers <129207811+paul1r@users.noreply.github.com> |
02b074d88924e128bbf19902b3eb6846ccef8e75 | 2023-11-22 19:53:53 | Bilal Khan | Added a new paragraph in the contribution guide about an error because it occurred to me and may occur to others also. (#11131)
**What this PR does / why we need it**:
This PR contains a paragraph in which there is a remember guide when
running the `make docs` command. For some users, it may give an error
because they have not added the `/tmp` path into the Docker settings.
Without this setting, running the
`http://localhost:3002/docs/loki/latest/` URL won't work.
That's why I added this guide so it becomes easy for others also.
**Which issue(s) this PR fixes**:
Fixes #<issue number>
**Special notes for your reviewer**:
**Checklist**
- [x] Reviewed the
[`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md)
guide (**required**)
- [x] Documentation added
- [ ] Tests updated
- [ ] `CHANGELOG.md` updated
- [ ] If the change is worth mentioning in the release notes, add
`add-to-release-notes` label
- [ ] Changes that require user attention or interaction to upgrade are
documented in `docs/sources/setup/upgrade/_index.md`
- [ ] For Helm chart changes bump the Helm chart version in
`production/helm/loki/Chart.yaml` and update
`production/helm/loki/CHANGELOG.md` and
`production/helm/loki/README.md`. [Example
PR](https://github.com/grafana/loki/commit/d10549e3ece02120974929894ee333d07755d213)
- [ ] If the change is deprecating or removing a configuration option,
update the `deprecated-config.yaml` and `deleted-config.yaml` files
respectively in the `tools/deprecated-config-checker` directory.
[Example
PR](https://github.com/grafana/loki/pull/10840/commits/0d4416a4b03739583349934b96f272fb4f685d15)
---------
Signed-off-by: Bilal Khan <bilalkhanrecovered@gmail.com>
Co-authored-by: J Stickler <julie.stickler@grafana.com>
Co-authored-by: Michel Hollands <42814411+MichelHollands@users.noreply.github.com>
Co-authored-by: Jack Baldry <jack.baldry@grafana.com> | false |
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 5226d96ed37c4..b643a46ddf6f9 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -149,4 +149,11 @@ To get a local preview of the documentation:
3. Run the command `make docs`. This uses the `grafana/docs` image which internally uses Hugo to generate the static site.
4. Open http://localhost:3002/docs/loki/latest/ to review your changes.
+**Remember:** If running `make docs` command gave you the following error.
+
+ - `path /tmp/make-docs.Dcq is not shared from the host and is not known to Docker.`
+ - `You can configure shared paths from Docker -> Preferences... -> Resources -> File Sharing.`
+
+Then you can go to Docker Desktop settings and open the resources, add the temporary directory path `/tmp`.
+
> Note that `make docs` uses a lot of memory. If it crashes, increase the memory allocated to Docker and try again.
| unknown | Added a new paragraph in the contribution guide about an error because it occurred to me and may occur to others also. (#11131)
**What this PR does / why we need it**:
This PR contains a paragraph in which there is a remember guide when
running the `make docs` command. For some users, it may give an error
because they have not added the `/tmp` path into the Docker settings.
Without this setting, running the
`http://localhost:3002/docs/loki/latest/` URL won't work.
That's why I added this guide so it becomes easy for others also.
**Which issue(s) this PR fixes**:
Fixes #<issue number>
**Special notes for your reviewer**:
**Checklist**
- [x] Reviewed the
[`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md)
guide (**required**)
- [x] Documentation added
- [ ] Tests updated
- [ ] `CHANGELOG.md` updated
- [ ] If the change is worth mentioning in the release notes, add
`add-to-release-notes` label
- [ ] Changes that require user attention or interaction to upgrade are
documented in `docs/sources/setup/upgrade/_index.md`
- [ ] For Helm chart changes bump the Helm chart version in
`production/helm/loki/Chart.yaml` and update
`production/helm/loki/CHANGELOG.md` and
`production/helm/loki/README.md`. [Example
PR](https://github.com/grafana/loki/commit/d10549e3ece02120974929894ee333d07755d213)
- [ ] If the change is deprecating or removing a configuration option,
update the `deprecated-config.yaml` and `deleted-config.yaml` files
respectively in the `tools/deprecated-config-checker` directory.
[Example
PR](https://github.com/grafana/loki/pull/10840/commits/0d4416a4b03739583349934b96f272fb4f685d15)
---------
Signed-off-by: Bilal Khan <bilalkhanrecovered@gmail.com>
Co-authored-by: J Stickler <julie.stickler@grafana.com>
Co-authored-by: Michel Hollands <42814411+MichelHollands@users.noreply.github.com>
Co-authored-by: Jack Baldry <jack.baldry@grafana.com> |
1086783a1d8886f0e6888289975e771e18d800e6 | 2024-06-18 00:30:02 | Owen Diehl | fix: separates directory creation from permission checks (#13248) | false |
diff --git a/pkg/storage/chunk/client/util/util.go b/pkg/storage/chunk/client/util/util.go
index 3485552c220fd..7b62b475caaf7 100644
--- a/pkg/storage/chunk/client/util/util.go
+++ b/pkg/storage/chunk/client/util/util.go
@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"io"
+ "io/fs"
"os"
ot "github.com/opentracing/opentracing-go"
@@ -67,17 +68,31 @@ func DoParallelQueries(
// EnsureDirectory makes sure directory is there, if not creates it if not
func EnsureDirectory(dir string) error {
+ return EnsureDirectoryWithDefaultPermissions(dir, 0o777)
+}
+
+func EnsureDirectoryWithDefaultPermissions(dir string, mode fs.FileMode) error {
info, err := os.Stat(dir)
if os.IsNotExist(err) {
- return os.MkdirAll(dir, 0o777)
+ return os.MkdirAll(dir, mode)
} else if err == nil && !info.IsDir() {
return fmt.Errorf("not a directory: %s", dir)
- } else if err == nil && info.Mode()&0700 != 0700 {
- return fmt.Errorf("insufficient permissions: %s %s", dir, info.Mode())
}
return err
}
+func RequirePermissions(path string, required fs.FileMode) error {
+ info, err := os.Stat(path)
+ if err != nil {
+ return err
+ }
+
+ if mode := info.Mode(); mode&required != required {
+ return fmt.Errorf("insufficient permissions for path %s: required %s but found %s", path, required.String(), mode.String())
+ }
+ return nil
+}
+
// ReadCloserWithContextCancelFunc helps with cancelling the context when closing a ReadCloser.
// NOTE: The consumer of ReadCloserWithContextCancelFunc should always call the Close method when it is done reading which otherwise could cause a resource leak.
type ReadCloserWithContextCancelFunc struct {
diff --git a/pkg/storage/chunk/client/util/util_test.go b/pkg/storage/chunk/client/util/util_test.go
new file mode 100644
index 0000000000000..360194cefa160
--- /dev/null
+++ b/pkg/storage/chunk/client/util/util_test.go
@@ -0,0 +1,30 @@
+package util
+
+import (
+ "os"
+ "path/filepath"
+ "testing"
+
+ "github.com/stretchr/testify/require"
+)
+
+func TestEnsureDir(t *testing.T) {
+ tmpDir := t.TempDir()
+
+ // Directory to be created by EnsureDir
+ dirPath := filepath.Join(tmpDir, "testdir")
+
+ // Ensure the directory does not exist before the test
+ if _, err := os.Stat(dirPath); !os.IsNotExist(err) {
+ t.Fatalf("Directory already exists: %v", err)
+ }
+
+ // create with default permissions
+ require.NoError(t, EnsureDirectoryWithDefaultPermissions(dirPath, 0o640))
+
+ // ensure the directory passes the permission check for more restrictive permissions
+ require.NoError(t, RequirePermissions(dirPath, 0o600))
+
+ // ensure the directory fails the permission check for less restrictive permissions
+ require.Error(t, RequirePermissions(dirPath, 0o660))
+}
diff --git a/pkg/storage/stores/shipper/bloomshipper/store.go b/pkg/storage/stores/shipper/bloomshipper/store.go
index 8b2ff3365d766..f2c77d7ac74e0 100644
--- a/pkg/storage/stores/shipper/bloomshipper/store.go
+++ b/pkg/storage/stores/shipper/bloomshipper/store.go
@@ -324,6 +324,9 @@ func NewBloomStore(
if err := util.EnsureDirectory(wd); err != nil {
return nil, errors.Wrapf(err, "failed to create working directory for bloom store: '%s'", wd)
}
+ if err := util.RequirePermissions(wd, 0o700); err != nil {
+ return nil, errors.Wrapf(err, "insufficient permissions on working directory for bloom store: '%s'", wd)
+ }
}
for _, periodicConfig := range periodicConfigs {
| fix | separates directory creation from permission checks (#13248) |