question | answer | tag | question_id | score
---|---|---|---|---|
source_dataset = tf.data.TextLineDataset('primary.csv')
target_dataset = tf.data.TextLineDataset('secondary.csv')
dataset = tf.data.Dataset.zip((source_dataset, target_dataset))
dataset = dataset.shard(10000, 0)
dataset = dataset.map(lambda source, target: (tf.string_to_number(tf.string_split([source], delimiter=',').values, tf.int32),
tf.string_to_number(tf.string_split([target], delimiter=',').values, tf.int32)))
dataset = dataset.map(lambda source, target: (source, tf.concat(([start_token], target), axis=0), tf.concat((target, [end_token]), axis=0)))
dataset = dataset.map(lambda source, target_in, target_out: (source, tf.size(source), target_in, target_out, tf.size(target_in)))
dataset = dataset.shuffle(NUM_SAMPLES) #This is the important line of code
I would like to shuffle my entire dataset fully, but shuffle() requires a buffer size (a number of samples to pull), and tf.size() does not work with a tf.data.Dataset.
How can I shuffle properly?
| I was working with tf.data.FixedLengthRecordDataset() and ran into a similar problem.
In my case, I was trying to only take a certain percentage of the raw data.
Since I knew all the records have a fixed length, a workaround for me was:
totalBytes = sum([os.path.getsize(os.path.join(filepath, filename)) for filename in os.listdir(filepath)])
numRecordsToTake = tf.cast(0.01 * percentage * totalBytes / bytesPerRecord, tf.int64)
dataset = tf.data.FixedLengthRecordDataset(filenames, recordBytes).take(numRecordsToTake)
In your case, my suggestion would be to count the number of records in 'primary.csv' and 'secondary.csv' directly in Python. Alternatively, I don't think setting the buffer_size argument really requires counting the files for your purpose. According to the accepted answer about the meaning of buffer_size, a number that's greater than the number of elements in the dataset will ensure a uniform shuffle across the whole dataset. So just putting in a really big number (that you think will surpass the dataset size) should work.
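A rough sketch of the counting approach, assuming the two CSV files from the question (the helper name is mine, not part of the original code):
def count_lines(path):
    # count records by iterating the file once in plain Python
    with open(path) as f:
        return sum(1 for _ in f)

num_samples = min(count_lines('primary.csv'), count_lines('secondary.csv'))
dataset = dataset.shuffle(num_samples)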
| DataSet | 47,735,896 | 10 |
I have a very big table of time series data that have these columns:
Timestamp
LicensePlate
UberRide#
Speed
Each collection of LicensePlate/UberRide data should be processed considering the whole set of data. In other words, I do not need to process the data row by row, but all rows grouped by (LicensePlate/UberRide) together.
I am planning to use Spark with the DataFrame API, but I am confused about how I can perform a custom calculation over a grouped Spark DataFrame.
What I need to do is:
Get all data
Group by some columns
For each Spark DataFrame group, apply a function f(x) and return a custom object per group
Get the results by applying g(x) and returning a single custom object
How can I do steps 3 and 4? Any hints on which Spark API (DataFrame, Dataset, RDD, maybe pandas...) I should use?
The whole workflow can be seen below:
What you are looking for has existed since Spark 2.3: pandas vectorized UDFs. They let you group a DataFrame and apply custom transformations with pandas, distributed over the groups:
df.groupBy("groupColumn").apply(myCustomPandasTransformation)
It is very easy to use so I will just put a link to Databricks' presentation of pandas UDF.
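For illustration, a minimal sketch of such a grouped pandas UDF (Spark 2.3+); the schema, column names and the aggregation are assumptions based on the question, not part of the original answer:
import pandas as pd
from pyspark.sql.functions import pandas_udf, PandasUDFType

@pandas_udf("LicensePlate string, UberRide int, avgSpeed double",
            PandasUDFType.GROUPED_MAP)
def summarize_ride(pdf):
    # pdf is a plain pandas DataFrame holding one (LicensePlate, UberRide) group
    return pd.DataFrame({
        "LicensePlate": [pdf["LicensePlate"].iloc[0]],
        "UberRide": [pdf["UberRide"].iloc[0]],
        "avgSpeed": [pdf["Speed"].mean()],
    })

result = df.groupBy("LicensePlate", "UberRide").apply(summarize_ride)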
However, I don't know of such a practical way to make grouped transformations in Scala yet, so any additional advice is welcome.
EDIT: in Scala, you can achieve the same thing in earlier versions of Spark, using Dataset's groupByKey + mapGroups/flatMapGroups.
| DataSet | 39,600,160 | 10 |
I'm trying to make a bar chart in Chart.js (using Chart.js 2.2.2).
I'm having trouble adding new datasets to an existing chart.
How can I add a new dataset "Vendas" with data: [10,20,30,40,50,60,70]?
var data = {
labels: ["January", "February", "March", "April", "May", "June", "July"],
datasets: [
{
label: "Compras",
backgroundColor: [
'rgba(255, 99, 132, 0.2)',
'rgba(54, 162, 235, 0.2)',
'rgba(255, 206, 86, 0.2)',
'rgba(75, 192, 192, 0.2)',
'rgba(153, 102, 255, 0.2)',
'rgba(255, 159, 64, 0.2)'
],
borderColor: [
'rgba(255,99,132,1)',
'rgba(54, 162, 235, 1)',
'rgba(255, 206, 86, 1)',
'rgba(75, 192, 192, 1)',
'rgba(153, 102, 255, 1)',
'rgba(255, 159, 64, 1)'
],
borderWidth: 1,
data: [65, 59, 80, 81, 56, 55, 40],
}
]
};
var ctx = $("#barOrgaoAno").get(0).getContext("2d");
var myBarChart = new Chart(ctx,{
type: "bar",
data: data,
});
I tried two examples I found on the internet but I can't get either of them to work.
Example 1
barChartDemo.addData([dData()], "dD " + index);
Example 2
JSFiddle
var myNewDataset = {
label: "My Second dataset",
fillColor: "rgba(187,205,151,0.5)",
strokeColor: "rgba(187,205,151,0.8)",
highlightFill: "rgba(187,205,151,0.75)",
highlightStroke: "rgba(187,205,151,1)",
data: [48, 40, 19, 86, 27, 90, 28]
}
var bars = []
myNewDataset.data.forEach(function (value, i) {
bars.push(new myBarChart.BarClass({
value: value,
label: myBarChart.datasets[0].bars[i].label,
x: myBarChart.scale.calculateBarX(myBarChart.datasets.length + 1, myBarChart.datasets.length, i),
y: myBarChart.scale.endPoint,
width: myBarChart.scale.calculateBarWidth(myBarChart.datasets.length + 1),
base: myBarChart.scale.endPoint,
strokeColor: myNewDataset.strokeColor,
fillColor: myNewDataset.fillColor
}))
})
myBarChart.datasets.push({
bars: bars
})
myBarChart.update();
Since you store your chart data in a variable (called data in your code), you can do it with a simple function on a button click:
$('button').click(function() {
// You create the new dataset `Vendas` with new data and color to differentiate
var newDataset = {
label: "Vendas",
backgroundColor: 'rgba(99, 255, 132, 0.2)',
borderColor: 'rgba(99, 255, 132, 1)',
borderWidth: 1,
data: [10, 20, 30, 40, 50, 60, 70],
}
// You add the newly created dataset to the list of `data`
data.datasets.push(newDataset);
// You update the chart to take into account the new dataset
myBarChart.update();
});
You can see the full code on this jsFiddle and here is its result after a click:
| DataSet | 39,475,891 | 10 |
I'm currently working on a project using sigma.js where I need to show a large number of nodes and edges (~10,000 to ~100,000 of each) stored in a JSON file. But the library gets laggy when I load the JSON and on each refresh, and when it shows me the graph it doesn't space out the nodes. I was wondering if someone knows how to represent this kind of dataset properly.
To be honest, I am facing the same issue. If it helps, I think the book example from the Gephi.org tutorial is still the best.
For the moment I am including sigma.js from Cloudflare's CDN, but I don't have a real solution for this.
You can replace the local library with a CDN link, like the one used for phaser.io (explained in its git repository); tween.js is used the same way (it's a common approach among web devs, see #indiedev #indiegamedev on twitter).
<script src="sigma.min.js"></script>
<script src="sigma.parsers.json.min.js"></script>
http://jsfiddle.net/thefailtheory/L45ue3er/
| DataSet | 36,543,964 | 10 |
I'm trying to create a local SQL Server Reporting Services report (.rdlc file) and connect this report to some data sets that I generate in code (no direct SQL Server connection).
I create a ReportDataProvider class with some instance methods that return IList<T> for various sets of criteria - but I cannot seem to find a way to make those data providing methods show up in the Reporting Services designer inside Visual Studio 2013.
When I look at the dialog that appears after clicking on Add DataSet on the Datasets node in the Report Data explorer window, I see a ton of my classes listed there - but not my data provider class.
Is there anything special I need to be aware of (make the class static? Decorate it with some attribute?) in order for it to show up in that dropdown list of possible data sources? I tried various things, but have failed to find any way to get this to work properly...
I did some research and tried different ways to add classes. Unfortunately, it turns out that you can't see static classes in this designer. I tried different approaches but had no luck.
For non-static classes this procedure works for me every time, even with interfaces like IList (not shown here):
Make sure the namespace with your data report classes is available in the project that contains the .rdlc files. It may be that you need to add a reference.
Write the data report class and rebuild the solution.
Close and reopen the .rdlc files in your VS.
I am using VS 2013 Ultimate Update 2.
These are my classes:
using System.Collections.Generic;
namespace YourReportNamespace
{
public class ReportClass
{
public List<string> TestReportData()
{
return new List<string>();
}
public static List<string> StaticTestReportData()
{
return new List<string>();
}
}
public class ReportWithFieldsClass
{
private List<string> Data = new List<string>();
public List<string> TestReportData()
{
return Data;
}
public List<string> TestReportData2()
{
return Data;
}
public static List<string> StaticTestReportData()
{
return new List<string>();
}
}
public static class ReportWithFieldsStaticClass //This class will not appear
{
private static List<string> Data = new List<string>();
public static List<string> StaticTestReportDataFromField()
{
return Data;
}
public static List<string> StaticTestReportData()
{
return new List<string>();
}
}
}
This is what I got in the designer after going through these steps:
| DataSet | 27,227,560 | 10 |
In ggplot2's built-in mpg dataset there is a variable called "fl", which is a factor with levels: "c", "d", "e", "p", & "r".
Does anyone know what those letters are supposed to stand for? Needless to say, googling those letters has yet to give me any relevant leads...
library(ggplot2)
data(mpg)
str(mpg)
?mpg
[Note: There was a similar question on SO re: the mtcars dataset, which gave me the impression that this would be an appropriate forum for this sort of question.]
| The fuel:
e: ethanol E85; note that subset(mpg, fl=="e") pulls up only "new" American cars, and that their fuel economy is much lower than the corresponding (presumably gasoline) models, which lines up with the lower energy content of ethanol
d: diesel
r: regular
p: premium
c: CNG (note that as far as I know the Civic is basically the only passenger car that runs on CNG in the US).
Note, I have no reason to know this other than an educated guess based on the rest of the data, but here is some graphical evidence:
ggplot(mpg, aes(x=fl, y=hwy)) + geom_boxplot() + facet_wrap(~cyl, nrow=1)
Notice how e is consistently low, d is consistently high at least where there is more than 1 data point (diesel has higher energy content), and p is consistently higher than r (premium allows cars to run at higher compression ratios and efficiency, though premium actually has lower energy content than regular) for each cylinder category (facets are # of cylinders).
UPDATE: as per @naught101, this now appears to be documented.
| DataSet | 25,548,656 | 10 |
I have a dataset wherein I am trying to determine the number of risk factors per person. So I have the following data:
Person_ID Age Smoker Diabetes
001 30 Y N
002 45 N N
003 27 N Y
004 18 Y Y
005 55 Y Y
Each attribute (Age, Smoker, Diabetes) has its own condition to determine whether it is a risk factor. So if Age >= 45, it's a risk factor. Smoker and Diabetes are risk factors if they are "Y". What I would like is to add a column that adds up the number of risk factors for each person based on those conditions. So the data would look like this:
Person_ID Age Smoker Diabetes Risk_Factors
001 30 Y N 1
002 25 N N 0
003 27 N Y 1
004 18 Y Y 2
005 55 Y Y 3
I have a sample dataset that I was fooling around with in Excel, and the way I did it there was to use the COUNTIF formula like so:
=COUNTIF(B2,">45") + COUNTIF(C2,"=Y") + COUNTIF(D2,"=Y")
However, the actual dataset that I will be using is way too large for Excel, so I'm learning pandas for python. I wish I could provide examples of what I've already tried, but frankly I don't even know where to start. I looked at this question, but it doesn't really address what to do about applying it to an entire new column using different conditions from multiple columns. Any suggestions?
| I would do this the following way.
For each column, create a new boolean series using the column's condition
Add those series row-wise
(Note that this is simpler if your Smoker and Diabetes columns are already boolean (True/False) instead of strings.)
It might look like this:
df = pd.DataFrame({'Age': [30,45,27,18,55],
'Smoker':['Y','N','N','Y','Y'],
'Diabetes': ['N','N','Y','Y','Y']})
Age Diabetes Smoker
0 30 N Y
1 45 N N
2 27 Y N
3 18 Y Y
4 55 Y Y
#Step 1
risk1 = df.Age > 45
risk2 = df.Smoker == "Y"
risk3 = df.Diabetes == "Y"
risk_df = pd.concat([risk1,risk2,risk3],axis=1)
Age Smoker Diabetes
0 False True False
1 False False False
2 False False True
3 False True True
4 True True True
df['Risk_Factors'] = risk_df.sum(axis=1)
Age Diabetes Smoker Risk_Factors
0 30 N Y 1
1 45 N N 0
2 27 Y N 1
3 18 Y Y 2
4 55 Y Y 3
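As a small addition to the answer above (not part of the original), the same logic can be written in one step, since the boolean comparisons sum as 0/1 once cast to int:
df['Risk_Factors'] = ((df.Age > 45).astype(int)
                      + (df.Smoker == 'Y').astype(int)
                      + (df.Diabetes == 'Y').astype(int))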
| DataSet | 24,810,526 | 10 |
I need to read a data file into R for my assignment. You can download it from the following site.
http://archive.ics.uci.edu/ml/datasets/Acute+Inflammations
The data file ends with the extension .data, which I have never seen before. I tried read.table and the like but could not read it into R properly. Can anyone help me with this, please?
| It's a UTF-16 little endian file with a byte order mark at the beginning. read.table will fail unless you specify the correct encoding. This works for me on MacOS. Decimals are indicated by a comma.
read.table("diagnosis.data", fileEncoding="UTF-16", dec=",")
V1 V2 V3 V4 V5 V6 V7 V8
1 35.5 no yes no no no no no
2 35.9 no no yes yes yes yes no
3 35.9 no yes no no no no no
| DataSet | 21,101,927 | 10 |
I currently use a method in C# to read a table from a SQLite database into a DataTable,
but I want to send all of the tables into another object.
So I think I have to use a DataSet to combine all the DataTables
and send it to the object as a parameter.
Is there a method that easily reads all tables from a SQLite database into a DataSet?
Or do I have to read each table from the SQLite database into its own DataTable
and combine them into a DataSet by hand?
| The sql for listing all the tables is:
SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY 1
you could then fetch each table separately as a DataTable and add them into a DataSet - an example here: http://www.dotnetperls.com/dataset
so I guess the code would be something like:
DataSet d = new DataSet();
foreach (string tableName in GetTables()) {
    d.Tables.Add(GetDataTable("select * from " + tableName));
}
code for GetTables and GetDataTable (I'll leave piecing it together to you):
public ArrayList GetTables()
{
ArrayList list = new ArrayList();
// executes query that select names of all tables in master table of the database
String query = "SELECT name FROM sqlite_master " +
"WHERE type = 'table'" +
"ORDER BY 1";
try
{
DataTable table = GetDataTable(query);
// Return all table names in the ArrayList
foreach (DataRow row in table.Rows)
{
list.Add(row.ItemArray[0].ToString());
}
}
catch (Exception e)
{
Console.WriteLine(e.Message);
}
return list;
}
public DataTable GetDataTable(string sql)
{
try
{
DataTable dt = new DataTable();
using (var c = new SQLiteConnection(dbConnection))
{
c.Open();
using (SQLiteCommand cmd = new SQLiteCommand(sql, c))
{
using (SQLiteDataReader rdr = cmd.ExecuteReader())
{
dt.Load(rdr);
return dt;
}
}
}
}
catch (Exception e)
{
Console.WriteLine(e.Message);
return null;
}
}
| DataSet | 20,256,043 | 10 |
How would one add a legend to the multiline series chart? I tried but am not getting any legend to display.
The block here:
http://bl.ocks.org/3884955
has a flaw when the various series converge to the same point, like zero: all the labels get overlaid on each other. Instead of going for these labels, a traditional legend would be useful.
I tried adding this
var legend = svg.append("g")
.attr("class", "legend")
.attr("height", 100)
.attr("width", 100)
.attr('transform', 'translate(-20,50)');
legend.selectAll('rect')
.datum(function(d) { return {name: d.name, value: d.values[d.values.length - 1]}; })
.append("rect")
.attr("x", width)
.attr("y", function(d, i){ return i * 20;})
.attr("width", 10)
.attr("height", 10)
.style("fill", function(d) {
return color.domain(d3.keys(d[0]).filter(function(key) { return key !== "day"; }));
});
legend.selectAll('text')
.datum(function(d) { return {name: d.name, value: d.values[d.values.length - 1]}; })
.append("text")
.attr("x", width)
.attr("y", function(d, i){ return i * 20 + 9;})
.text(function(d) {
return d.name;
});
to the end of the code. The key names (d.name) match how my data is formatted, but it does not display. At one point it showed all black boxes to the right of the graph, so that means I am close, but I am missing something important.
Any insight appreciated.
| Here is a fixed & refactored version of your code.
var legend = svg.selectAll('g')
.data(cities)
.enter()
.append('g')
.attr('class', 'legend');
legend.append('rect')
.attr('x', width - 20)
.attr('y', function(d, i){ return i * 20;})
.attr('width', 10)
.attr('height', 10)
.style('fill', function(d) {
return color(d.name);
});
legend.append('text')
.attr('x', width - 8)
.attr('y', function(d, i){ return (i * 20) + 9;})
.text(function(d){ return d.name; });
You need to use enter(), but enter() and exit() methods cannot be used with datum(). Quoting from the d3 wiki
selection.datum([value])
Gets or sets the bound data for each selected element. Unlike the selection.data method, this method does not compute a join (and thus does not compute enter and exit selections).
| DataSet | 14,775,962 | 10 |
In DataTable I could sorting with
dataTable.DefaultView.Sort = "SortField DESC";
I'm getting a DataSet from the database, and I was wondering whether I could sort the DataSet the way I do it with a DataTable.
you can still access the DataTable from the DataSet as follows,
ds.Tables[0].DefaultView.Sort = "SortField DESC";
Hope this helps.
| DataSet | 11,029,823 | 10 |
I want to read some quite huge files (to be precise: the Google ngram 1-word dataset) and count how many times a character occurs. Now I wrote this script:
import fileinput
files = ['../../datasets/googlebooks-eng-all-1gram-20090715-%i.csv' % value for value in range(0,9)]
charcounts = {}
lastfile = ''
for line in fileinput.input(files):
    line = line.strip()
    data = line.split('\t')
    for character in list(data[0]):
        if (not character in charcounts):
            charcounts[character] = 0
        charcounts[character] += int(data[1])
    if (fileinput.filename() is not lastfile):
        print(fileinput.filename())
        lastfile = fileinput.filename()
    if(fileinput.filelineno() % 100000 == 0):
        print(fileinput.filelineno())
print(charcounts)
which works fine until it reaches approximately line 700,000 of the first file; I then get this error:
../../datasets/googlebooks-eng-all-1gram-20090715-0.csv
100000
200000
300000
400000
500000
600000
700000
Traceback (most recent call last):
File "charactercounter.py", line 5, in <module>
for line in fileinput.input(files):
File "C:\Python31\lib\fileinput.py", line 254, in __next__
line = self.readline()
File "C:\Python31\lib\fileinput.py", line 349, in readline
self._buffer = self._file.readlines(self._bufsize)
File "C:\Python31\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x8d in position 7771: character maps to <undefined>
To solve this I searched the web a bit, and came up with this code:
import fileinput
files = ['../../datasets/googlebooks-eng-all-1gram-20090715-%i.csv' % value for value in range(0,9)]
charcounts = {}
lastfile = ''
for line in fileinput.input(files,False,'',0,'r',fileinput.hook_encoded('utf-8')):
    line = line.strip()
    data = line.split('\t')
    for character in list(data[0]):
        if (not character in charcounts):
            charcounts[character] = 0
        charcounts[character] += int(data[1])
    if (fileinput.filename() is not lastfile):
        print(fileinput.filename())
        lastfile = fileinput.filename()
    if(fileinput.filelineno() % 100000 == 0):
        print(fileinput.filelineno())
print(charcounts)
but the hook I now use tries to read the entire 990 MB file into memory at once, which kind of crashes my PC. Does anyone know how to rewrite this code so that it actually works?
p.s: the code hasn't even run all the way yet, so I don't even know if it does what it has to do, but for that to happen I first need to fix this bug.
Oh, and I use Python 3.2
| I do not know why fileinput does not work as expected.
I suggest you use the open function instead. The return value can be iterated over and will return lines, just like fileinput.
The code will then be something like:
for filename in files:
    print(filename)
    for filelineno, line in enumerate(open(filename, encoding="utf-8")):
        line = line.strip()
        data = line.split('\t')
        # ...
Some documentation links: enumerate, open, io.TextIOWrapper (open returns an instance of TextIOWrapper).
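For completeness, a sketch of the whole loop from the question rewritten around open(), reusing the file names and tab-separated format of the original script (only the iteration mechanism changes):
charcounts = {}
files = ['../../datasets/googlebooks-eng-all-1gram-20090715-%i.csv' % value for value in range(0,9)]

for filename in files:
    print(filename)
    for filelineno, line in enumerate(open(filename, encoding="utf-8")):
        line = line.strip()
        data = line.split('\t')
        for character in data[0]:
            # accumulate the per-character counts, as in the original script
            charcounts[character] = charcounts.get(character, 0) + int(data[1])
        if filelineno % 100000 == 0:
            print(filelineno)

print(charcounts)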
| DataSet | 5,493,073 | 10 |
I am working with a windows application.
I load a DataSet with a DataAdapter using the Fill method (objDataAdaptere.fill(objDataSet, "string")).
Now I want to get a cell of this DataSet (for example row(0), cell(0)).
How can I do this?
Thanks.
| The value? Assuming you mean "of the first table of the data-set", then:
object value = dataSet.Tables[0].Rows[0][0];
but more specifically:
object value = dataSet.Tables[tableIndex].Rows[rowIndex][colIndex];
| DataSet | 4,332,826 | 10 |
I am building a demo dataset for my webapp. I would like thousands of "real looking" names. They should not be names of famous people or fictional heroes, or names that evoke associations. They should be varied and different-sounding, but realistic, male and female names and surnames.
Birth dates and other data can be randomly generated, but right now I am stuck with the names issue. Do you have any creative ideas for this?
UPDATE: Dave is the Winner
Sample data using the tool he suggested, identitygenerator.com (very user friendly and powerful tool):
mysql> select name, sex, dob from Customer order by rand() limit 30;
+-------------------+---------+------------+
| name | sex | dob |
+-------------------+---------+------------+
| Seth Copeland | male | 1958-03-02 |
| Nomlanga Short | female | 1993-09-15 |
| Cheryl Kerr | female | 1962-05-14 |
| Ralph Murphy | male | 1984-07-14 |
| Whilemina Sparks | female | 1975-08-07 |
| Bernard Atkins | male | 1953-02-23 |
| Kane Lowery | male | 1964-02-24 |
| Victor Johnson | unknown | 1993-05-31 |
| Lawrence Powers | male | 1965-12-24 |
| Arsenio Caldwell | male | 1965-06-29 |
| Beatrice Espinoza | female | 1976-01-09 |
| Gil Herring | unknown | 1992-10-09 |
| Nelle Rocha | female | 1956-02-29 |
| Chantale Benson | female | 1969-04-27 |
| Katell Harris | female | 1976-03-14 |
| Rajah Kline | unknown | 1974-01-19 |
| Quynn Pennington | unknown | 1950-06-22 |
| Abraham Clemons | male | 1982-07-14 |
| Coby Bird | male | 1989-03-14 |
| Caryn Buckner | unknown | 1979-12-01 |
| Kenyon Sheppard | male | 1963-02-19 |
| Dana Chandler | female | 1958-05-25 |
| Dara Hogan | female | 1983-10-22 |
| April Carroll | unknown | 1954-03-10 |
| Joan Stone | female | 1964-01-31 |
| Ella Combs | female | 1993-11-19 |
| Sacha Becker | unknown | 1964-01-06 |
| Gray Palmer | male | 1981-08-06 |
| Marny Rivers | female | 1953-06-02 |
| Dawn Hull | female | 1989-10-05 |
+-------------------+---------+------------+
30 rows in set (0.02 sec)
There are websites which will generate fake names for you. I usually use fakenamegenerator.com but I think that only does one person at a time. identitygenerator.com has a tool which will generate a large number of random names - and other personal information - downloadable in various formats.
| DataSet | 2,378,509 | 10 |
I have tried executing this Docker command to set up the Jaeger agent and Jaeger collector with Elasticsearch.
sudo docker run \
-p 5775:5775/udp \
-p 6831:6831/udp \
-p 6832:6832/udp \
-p 5778:5778 \
-p 16686:16686 \
-p 14268:14268 \
-e SPAN_STORAGE_TYPE=elasticsearch \
--name=jaeger \
jaegertracing/all-in-one:latest
but this command gives the below error. How do I configure Jaeger with Elasticsearch?
"msg":"Failed to init storage factory","error":"health check timeout: no Elasticsearch node available","errorVerbose":"no Elasticsearch node available\
After searching for a solution for some time, I found a docker-compose.yml file which had the Jaeger query, agent, collector and Elasticsearch configurations.
docker-compose.yml
version: "3"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:6.3.1
networks:
- elastic-jaeger
ports:
- "127.0.0.1:9200:9200"
- "127.0.0.1:9300:9300"
restart: on-failure
environment:
- cluster.name=jaeger-cluster
- discovery.type=single-node
- http.host=0.0.0.0
- transport.host=127.0.0.1
- ES_JAVA_OPTS=-Xms512m -Xmx512m
- xpack.security.enabled=false
volumes:
- esdata:/usr/share/elasticsearch/data
jaeger-collector:
image: jaegertracing/jaeger-collector
ports:
- "14269:14269"
- "14268:14268"
- "14267:14267"
- "9411:9411"
networks:
- elastic-jaeger
restart: on-failure
environment:
- SPAN_STORAGE_TYPE=elasticsearch
command: [
"--es.server-urls=http://elasticsearch:9200",
"--es.num-shards=1",
"--es.num-replicas=0",
"--log-level=error"
]
depends_on:
- elasticsearch
jaeger-agent:
image: jaegertracing/jaeger-agent
hostname: jaeger-agent
command: ["--collector.host-port=jaeger-collector:14267"]
ports:
- "5775:5775/udp"
- "6831:6831/udp"
- "6832:6832/udp"
- "5778:5778"
networks:
- elastic-jaeger
restart: on-failure
environment:
- SPAN_STORAGE_TYPE=elasticsearch
depends_on:
- jaeger-collector
jaeger-query:
image: jaegertracing/jaeger-query
environment:
- SPAN_STORAGE_TYPE=elasticsearch
- no_proxy=localhost
ports:
- "16686:16686"
- "16687:16687"
networks:
- elastic-jaeger
restart: on-failure
command: [
"--es.server-urls=http://elasticsearch:9200",
"--span-storage.type=elasticsearch",
"--log-level=debug"
]
depends_on:
- jaeger-agent
volumes:
esdata:
driver: local
networks:
elastic-jaeger:
driver: bridge
The docker-compose.yml file installs Elasticsearch and the Jaeger collector, query and agent.
Install Docker and Docker Compose first:
https://docs.docker.com/compose/install/#install-compose
Then, execute these commands in order
1. sudo docker-compose up -d elasticsearch
2. sudo docker-compose up -d
3. sudo docker ps -a
Start all the Docker containers (Jaeger agent, collector, query and Elasticsearch):
sudo docker start container-id
access -> http://localhost:16686/
| OpenTracing | 51,785,812 | 18 |
I'm trying to use OpenTracing.Contrib.NetCore with Serilog. I need to send my custom logs to Jaeger. Right now, it works only when I use the default logger factory Microsoft.Extensions.Logging.ILoggerFactory.
My Startup:
public void ConfigureServices(IServiceCollection services)
{
services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);
services.AddSingleton<ITracer>(sp =>
{
var loggerFactory = sp.GetRequiredService<ILoggerFactory>();
string serviceName = sp.GetRequiredService<IHostingEnvironment>().ApplicationName;
var samplerConfiguration = new Configuration.SamplerConfiguration(loggerFactory)
.WithType(ConstSampler.Type)
.WithParam(1);
var senderConfiguration = new Configuration.SenderConfiguration(loggerFactory)
.WithAgentHost("localhost")
.WithAgentPort(6831);
var reporterConfiguration = new Configuration.ReporterConfiguration(loggerFactory)
.WithLogSpans(true)
.WithSender(senderConfiguration);
var tracer = (Tracer)new Configuration(serviceName, loggerFactory)
.WithSampler(samplerConfiguration)
.WithReporter(reporterConfiguration)
.GetTracer();
//GlobalTracer.Register(tracer);
return tracer;
});
services.AddOpenTracing();
}
and somewhere in controller:
[Route("api/[controller]")]
public class ValuesController : ControllerBase
{
private readonly ILogger<ValuesController> _logger;
public ValuesController(ILogger<ValuesController> logger)
{
_logger = logger;
}
[HttpGet("{id}")]
public ActionResult<string> Get(int id)
{
_logger.LogWarning("Get values by id: {valueId}", id);
return "value";
}
}
As a result, I am able to see that log in the Jaeger UI.
But when I use Serilog, there are no custom logs at all. I've added UseSerilog() to the WebHostBuilder, and I can see all the custom logs in the console, but not in Jaeger.
There is an open issue on GitHub. Could you please suggest how I can use Serilog with OpenTracing?
| This is a limitation in the Serilog logger factory implementation; in particular, Serilog currently ignores added providers and assumes that Serilog Sinks will replace them instead.
So, the solution is to implement a simple WriteTo.OpenTracing() method to connect Serilog directly to OpenTracing:
public class OpenTracingSink : ILogEventSink
{
private readonly ITracer _tracer;
private readonly IFormatProvider _formatProvider;
public OpenTracingSink(ITracer tracer, IFormatProvider formatProvider)
{
_tracer = tracer;
_formatProvider = formatProvider;
}
public void Emit(LogEvent logEvent)
{
ISpan span = _tracer.ActiveSpan;
if (span == null)
{
// Creating a new span for a log message seems brutal so we ignore messages if we can't attach it to an active span.
return;
}
var fields = new Dictionary<string, object>
{
{ "component", logEvent.Properties["SourceContext"] },
{ "level", logEvent.Level.ToString() }
};
fields[LogFields.Event] = "log";
try
{
fields[LogFields.Message] = logEvent.RenderMessage(_formatProvider);
fields["message.template"] = logEvent.MessageTemplate.Text;
if (logEvent.Exception != null)
{
fields[LogFields.ErrorKind] = logEvent.Exception.GetType().FullName;
fields[LogFields.ErrorObject] = logEvent.Exception;
}
if (logEvent.Properties != null)
{
foreach (var property in logEvent.Properties)
{
fields[property.Key] = property.Value;
}
}
}
catch (Exception logException)
{
fields["mbv.common.logging.error"] = logException.ToString();
}
span.Log(fields);
}
}
public static class OpenTracingSinkExtensions
{
public static LoggerConfiguration OpenTracing(
this LoggerSinkConfiguration loggerConfiguration,
IFormatProvider formatProvider = null)
{
return loggerConfiguration.Sink(new OpenTracingSink(GlobalTracer.Instance, formatProvider));
}
}
| OpenTracing | 56,156,809 | 18 |
I instrumented a simple Spring-Boot application with Jaeger, but when I run the application within a Docker container with docker-compose, I can't see any traces in the Jaeger frontend.
I'm creating the tracer configuration by reading the properties from environment variables that I set in the docker-compose file.
This is how I create the tracer:
Configuration config = Configuration.fromEnv();
return config.getTracer();
And this is my docker-compose file:
version: '2'
services:
demo:
build: opentracing_demo/.
ports:
- "8080:8080"
environment:
- JAEGER_SERVICE_NAME=hello_service
- JAEGER_AGENT_HOST=jaeger
- JAEGER_AGENT_PORT=6831
jaeger:
image: jaegertracing/all-in-one:latest
ports:
- "5775:5775/udp"
- "6831:6831/udp"
- "6832:6832/udp"
- "5778:5778"
- "16686:16686"
- "14268:14268"
- "9411:9411"
You can also find my project on GitHub.
What am I doing wrong?
| I found the solution to my problem, in case anybody is facing similar issues.
I was missing the environment variable JAEGER_SAMPLER_MANAGER_HOST_PORT, which is necessary if the (default) remote controlled sampler is used for tracing.
This is the working docker-compose file:
version: '2'
services:
demo:
build: opentracing_demo/.
ports:
- "8080:8080"
environment:
- JAEGER_SERVICE_NAME=hello_service
- JAEGER_AGENT_HOST=jaeger
- JAEGER_AGENT_PORT=6831
- JAEGER_SAMPLER_MANAGER_HOST_PORT=jaeger:5778
jaeger:
image: jaegertracing/all-in-one:latest
ports:
- "5775:5775/udp"
- "6831:6831/udp"
- "6832:6832/udp"
- "5778:5778"
- "16686:16686"
- "14268:14268"
- "9411:9411"
| OpenTracing | 50,173,643 | 11 |
There is an existing Spring Boot app which uses the SLF4J logger. I decided to add support for distributed tracing via the standard OpenTracing API with Jaeger as the tracer. It is really amazing how easy the initial setup is - all that is required is adding two dependencies to the pom.xml:
<dependency>
<groupId>io.opentracing.contrib</groupId>
<artifactId>opentracing-spring-web-autoconfigure</artifactId>
<version>${io.opentracing.version}</version>
</dependency>
<dependency>
<groupId>io.jaegertracing</groupId>
<artifactId>jaeger-core</artifactId>
<version>${jaegerVersion}</version>
</dependency>
and providing the Tracer bean with the configuration:
@Bean
public io.opentracing.Tracer getTracer() throws ConfigurationException {
    return new io.jaegertracing.Tracer.Builder("my-spring-boot-app").build();
}
All works like a charm - app requests are processed by Jaeger and spans are created:
However, in the span logs there are only preHandle & afterCompletion events with info about the class / method that were called during request execution (no logs produced by the SLF4J logger are collected):
The question is whether it is possible to configure the Tracer to pick up the logs produced by the app logger (SLF4J in my case), so that all the application logs done via LOG.info / LOG.warn / LOG.error etc. would also be reflected in Jaeger.
NOTE: I have figured out how to log to span manually via opentracing API e.g.:
Scope scope = tracer.scopeManager().active();
if (scope != null) {
scope.span().log("...");
}
And do some manual manipulations with the ERROR tag for exception processing in filters e.g.
} catch(Exception ex) {
Tags.ERROR.set(span, true);
span.log(Map.of(Fields.EVENT, "error", Fields.ERROR_OBJECT, ex, Fields.MESSAGE, ex.getMessage()));
throw ex
}
But I'm still wondering if it is possible to configure the tracer to pick up the application logs automatically:
LOG.info -> tracer adds a new log to the active span
LOG.error -> tracer adds a new log to the active span plus adds the ERROR tag
UPDATE: I was able to add the application logs to the tracer by adding a wrapper for the logger, e.g.:
public void error(String message, Exception e) {
Scope scope = tracer.scopeManager().active();
if (scope != null) {
Span span = scope.span();
Tags.ERROR.set(span, true);
span.log(Map.of(Fields.EVENT, "error", Fields.ERROR_OBJECT, e, Fields.MESSAGE, e.getMessage()));
}
LOG.error(message, e);
}
However, so far I was not able to find OpenTracing configuration options that would add the application logs to the tracer automatically by default. Basically, it seems to be expected that the developer adds extra logs to the tracer programmatically when needed. Also, after investigating tracing more, it appears that logging and tracing are normally handled separately, and that adding all the application logs to the tracer is not a good idea (the tracer should mainly include sampled data and tags for request identification):
https://github.com/openzipkin/zipkin/issues/1453
https://peter.bourgon.org/blog/2016/02/07/logging-v-instrumentation.html
The https://github.com/opentracing-contrib/java-spring-cloud project automatically sends standard logging to the active span. Just add the following dependency to your pom.xml:
<dependency>
<groupId>io.opentracing.contrib</groupId>
<artifactId>opentracing-spring-cloud-starter</artifactId>
</dependency>
Or use this https://github.com/opentracing-contrib/java-spring-cloud/tree/master/instrument-starters/opentracing-spring-cloud-core starter if you want only logging integration.
| OpenTracing | 50,855,480 | 11 |
The W3C trace context defines the traceparent and tracestate headers for enabling distributed tracing.
My question(s) is then
How is it different from OpenTracing?
If the W3C has already defined usage of these headers, is OpenTracing using some other headers?
| OpenTracing, by design, did not define a format for propagating tracing headers. It was the responsibility of libraries who implemented OpenTracing to provide their own format for serialization/de-serialization of the span context. This was mostly an effort to be as broadly compatible as possible. Generally, you'll find three different popular header formats for OpenTracing - Zipkin (B3-*), Jaeger (uber-*), and the OpenTracing 'sample' headers (ot-*), although some vendors have started to add W3C TraceContext as well.
OpenTelemetry has chosen to adopt W3C TraceContext as one of its core propagation formats (in addition to Zipkin's B3 format), which should alleviate this problem in the future.
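For concreteness, a traceparent header is a single dash-separated value of four fields; a minimal sketch of parsing it (the example value is taken from the W3C spec):
def parse_traceparent(header):
    # "00-<32 hex trace-id>-<16 hex parent-id>-<2 hex trace-flags>"
    version, trace_id, parent_id, flags = header.split("-")
    return {
        "version": version,
        "trace_id": trace_id,
        "parent_id": parent_id,
        "sampled": int(flags, 16) & 0x01 == 0x01,
    }

print(parse_traceparent("00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01"))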
| OpenTracing | 62,304,436 | 11 |
I set up an HBase cluster to store data from OpenTSDB. Recently, due to a reboot of some of the nodes, HBase lost the table "tsdb". I can still see it on HBase's master node page, but when I click on it, it gives me a TableNotFoundException:
org.apache.hadoop.hbase.TableNotFoundException: tsdb
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:952)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:818)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:782)
at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:249)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:213)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:171)
......
I entered the HBase shell, trying to locate the 'tsdb' table, but got a similar message:
hbase(main):018:0> scan 'tsdb'
ROW COLUMN+CELL
ERROR: Unknown table tsdb!
However, when I tried to re-create this table, the HBase shell told me the table already exists...
hbase(main):013:0> create 'tsdb', {NAME => 't', VERSIONS => 1, BLOOMFILTER=>'ROW'}
ERROR: Table already exists: tsdb!
And I can also list the table in hbase shell
hbase(main):001:0> list
TABLE
tsdb
tsdb-uid
2 row(s) in 0.6730 seconds
Taking a look at the log, I found this, which should be the cause of my issue:
2012-05-14 12:06:22,140 WARN org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Encountered problems when prefetch META table:
org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for table: tsdb, row=tsdb,,99999999999999
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:157)
at org.apache.hadoop.hbase.client.MetaScanner.access$000(MetaScanner.java:52)
at org.apache.hadoop.hbase.client.MetaScanner$1.connect(MetaScanner.java:130)
at org.apache.hadoop.hbase.client.MetaScanner$1.connect(MetaScanner.java:127)
It says it cannot find the row for tsdb in .META., but there are indeed tsdb rows in .META.:
hbase(main):002:0> scan '.META.'
ROW COLUMN+CELL
tsdb,\x00\x00\x0FO\xA2\xF1\xD0\x00\x00\x01\x00\x00\x0E\x00\ column=info:regioninfo, timestamp=1336311752799, value={NAME => 'tsdb,\x00\x00\x0FO\xA2\xF1\xD0\x00\x00\x01\x00\x00\x0E\x00\x00\x02\x00\x00\x12\x00\x00\x03\x00\x00\x13\x00\x00\x
x00\x02\x00\x00\x12\x00\x00\x03\x00\x00\x13\x00\x00\x05\x00 05\x00\x001,1336311752340.7cd0d2205d9ae5fcadf843972ec74ec5.', STARTKEY => '\x00\x00\x0FO\xA2\xF1\xD0\x00\x00\x01\x00\x00\x0E\x00\x00\x02\x00\x00\x12\x00\x00\x03\x00\x00\x13\x00\
\x001,1336311752340.7cd0d2205d9ae5fcadf843972ec74ec5. x00\x05\x00\x001', ENDKEY => '\x00\x00\x10O\xA3\x8C\x80\x00\x00\x01\x00\x00\x0B\x00\x00\x02\x00\x00\x19\x00\x00\x03\x00\x00\x1A\x00\x00\x05\x00\x001', ENCODED => 7cd0d2205d9ae5f
cadf843972ec74ec5,}
tsdb,\x00\x00\x0FO\xA2\xF1\xD0\x00\x00\x01\x00\x00\x0E\x00\ column=info:server, timestamp=1337011527000, value=brycobapd01.usnycbt.amrs.bankofamerica.com:60020
x00\x02\x00\x00\x12\x00\x00\x03\x00\x00\x13\x00\x00\x05\x00
\x001,1336311752340.7cd0d2205d9ae5fcadf843972ec74ec5.
tsdb,\x00\x00\x0FO\xA2\xF1\xD0\x00\x00\x01\x00\x00\x0E\x00\ column=info:serverstartcode, timestamp=1337011527000, value=1337011518948
......
tsdb-uid,,1336081042372.a30d8074431c6a31c6a0a30e61fedefa. column=info:server, timestamp=1337011527458, value=bry200163111d.usnycbt.amrs.bankofamerica.com:60020
tsdb-uid,,1336081042372.a30d8074431c6a31c6a0a30e61fedefa. column=info:serverstartcode, timestamp=1337011527458, value=1337011519807
6 row(s) in 0.2950 seconds
Here is the result after I ran "hbck" on the cluster
ERROR: Region hdfs://slave-node-1:9000/hbase/tsdb/249438af5657bf1881a837c23997747e on HDFS, but not listed in META or deployed on any region server
ERROR: Region hdfs://slave-node-1:9000/hbase/tsdb/4f8c65fb72910870690b94848879db1c on HDFS, but not listed in META or deployed on any region server
ERROR: Region hdfs://slave-node-1:9000/hbase/tsdb/63276708b4ac9f11e241aca8b56e9def on HDFS, but not listed in META or deployed on any region server
ERROR: Region hdfs://slave-node-1:9000/hbase/tsdb/e54ee4def67d7f3b6dba75a3430e0544 on HDFS, but not listed in META or deployed on any region server
ERROR: (region tsdb,\x00\x00\x0FO\xA2\xF1\xD0\x00\x00\x01\x00\x00\x0E\x00\x00\x02\x00\x00\x12\x00\x00\x03\x00\x00\x13\x00\x00\x05\x00\x001,1336311752340.7cd0d2205d9ae5fcadf843972ec74ec5.) First region should start with an empty key. You need to create a new region and regioninfo in HDFS to plug the hole.
ERROR: Found inconsistency in table tsdb
Summary:
-ROOT- is okay.
Number of regions: 1
Deployed on: master-node,60020,1337011518948
.META. is okay.
Number of regions: 1
Deployed on: slave-node-2,60020,1337011519845
Table tsdb is inconsistent.
Number of regions: 5
Deployed on: slave-node-2,60020,1337011519845 slave-node-1,60020,1337011519807 master-node,60020,1337011518948
tsdb-uid is okay.
Number of regions: 1
Deployed on: slave-node-1,60020,1337011519807
5 inconsistencies detected.
Status: INCONSISTENT
I have run
bin/hbase hbck -fix
which unfortunately does not solve my problem
Could someone help me out with the following:
Is it possible to recover the table "tsdb"?
If 1 cannot be done, what is the suggested way to gracefully remove 'tsdb' and create a new one?
I'd greatly appreciate it if anybody could let me know the recommended way to reboot a node. Currently, I am leaving my master node always up. For the other nodes, I run these commands immediately after a reboot:
# start data node
bin/hadoop-daemon.sh start datanode
bin/hadoop-daemon.sh start jobtracker
# start hbase
bin/hbase-daemon.sh start zookeeper
bin/hbase-daemon.sh start regionserver
Many Thanks!
A bit late, but maybe it's helpful to someone searching.
Run the ZooKeeper shell hbase zkcli
In the shell run ls /hbase/table
Run rmr /hbase/table/TABLE_NAME
Restart HBase
| OpenTSDB | 10,586,246 | 18 |
I am trying to install OpenTSDB on Ubuntu, and I am following this documentation. But after running these commands:
git clone git://github.com/OpenTSDB/opentsdb.git
cd opentsdb
running this command gives the following console output:
./build.sh
Console Output:
seed-admin@seedadmin-Inspiron-3847:~/Abharthan/opentsdb$ sudo ./build.sh
+ test -f configure
+ ./bootstrap
./bootstrap: 17: exec: autoreconf: not found
What is the problem?
| sudo apt-get install autoconf solved my problem
| OpenTSDB | 32,255,654 | 17 |
We are trying to use HBase to store time-series data. The model we have currently stores the time-series as versions within a cell. This implies that the cell could end up storing millions of versions, and the queries on this time-series would retrieve a range of versions using the setTimeRange method available on the Get class in HBase.
e.g.
{
"row1" : {
"columnFamily1" : {
"column1" : {
1 : "1",
2 : "2"
},
"column2" : {
1 : "1"
}
}
}
}
Is this a reasonable model to store time-series data in HBase?
Is the alternate model of storing data in multiple columns (is it possible to query across columns) or rows more suitable?
| I don't think you should use versioning to store the time series here. Not because it won't work, but because it's not designed for that particular use case and there are other ways.
I suggest you store the time series with the time step as part of the column qualifier and the data itself as the value. Something like:
{
   "row1" : {
       "columnFamily1" : {
           "col1-000001" : "1",
           "col1-000002" : "2",
           "col1-000003" : "91",
           "col2-000001" : "31"
       }
    }
}
One nice thing here is that HBase stores the column qualifiers in sorted order, so when reading the time series back you should see the items in order.
Another realistic option would be to have the identifier for the record as the first part of the rowkey, but then have the time step in the rowkey as well. Something like:
{
"fooseries-00001" : {
"columnFamily1" : {
"val" : "1"
}
    },
    "fooseries-00002" : {
        "columnFamily1" : {
            "val" : "2"
        }
    }
}
This has the nice feature that it'll be pretty easy to do range scans in a particular series. For example, pulling out fooseries's steps 104 to 199 is going to be pretty trivial to implement and be efficient.
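To make the range-scan idea concrete, here is a hypothetical sketch using the happybase Python client (which talks to HBase through the Thrift gateway); the table name, column family and zero-padding scheme are my assumptions, not part of the original answer:
import happybase

connection = happybase.Connection('localhost')
table = connection.table('timeseries')

# write a few frames of the "fooseries" series, time step embedded in the rowkey
for step, value in [(1, b'1'), (2, b'2'), (3, b'91')]:
    row_key = ('fooseries-%05d' % step).encode()
    table.put(row_key, {b'columnFamily1:val': value})

# range scan: pull steps 104 to 199 of "fooseries" back in sorted order
for row_key, data in table.scan(row_start=b'fooseries-00104',
                                row_stop=b'fooseries-00200'):
    print(row_key, data)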
The downside to this one is deleting an entire series is going to require a bit more management and synchronization. Another downside is that MapReduce analytics are going to have a hard time doing any sort of analysis on this data. With the above approach, the entire time series will be passed to one map() call, while here, map() will be called for each frame.
| OpenTSDB | 4,126,259 | 15 |
I used OpenTSDB over HBase (pseudo-distributed Hadoop on virtual box) to send data at very high load (~ 50,000 records / s). The system worked properly for a while but it went down suddenly. I terminated OpenTSDB and HBase. Unfortunately, I could never bring them up again. Every time I tried to run HBase and OpenTSDB, they showed error logs. Here I list the logs:
regionserver:
2015-07-01 18:15:30,752 INFO [sync.3] wal.FSHLog: Slow sync cost: 112 ms, current pipeline: [192.168.56.101:50010]
2015-07-01 18:15:41,277 INFO [regionserver/node1.vmcluster/192.168.56.101:16201.logRoller] wal.FSHLog: Rolled WAL /hbase/WALs/node1.vmcluster,16201,1435738612093/node1.vmcluster%2C16201%2C1435738612093.default.1435742101122 with entries=3841, filesize=123.61 MB; new WAL /hbase/WALs/node1.vmcluster,16201,1435738612093/node1.vmcluster%2C16201%2C1435738612093.default.1435742141109
2015-07-01 18:15:41,278 INFO [regionserver/node1.vmcluster/192.168.56.101:16201.logRoller] wal.FSHLog: Archiving hdfs://node1.vmcluster:9000/hbase/WALs/node1.vmcluster,16201,1435738612093/node1.vmcluster%2C16201%2C1435738612093.default.1435742061805 to hdfs://node1.vmcluster:9000/hbase/oldWALs/node1.vmcluster%2C16201%2C1435738612093.default.1435742061805
2015-07-01 18:15:42,249 INFO [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for tsdb,,1435740133573.1a692e2668a2b4a71aaf2805f9b00a72., current region memstore size 132.20 MB
2015-07-01 18:15:42,381 INFO [MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for tsdb,,1435740133573.1a692e2668a2b4a71aaf2805f9b00a72., current region memstore size 133.09 MB
2015-07-01 18:15:42,382 WARN [MemStoreFlusher.1] regionserver.DefaultMemStore: Snapshot called again without clearing previous. Doing nothing. Another ongoing flush or did we fail last attempt?
2015-07-01 18:15:42,391 FATAL [MemStoreFlusher.0] regionserver.HRegionServer: ABORTING region server node1.vmcluster,16201,1435738612093: Replay of WAL required. Forcing server shutdown
org.apache.hadoop.hbase.DroppedSnapshotException: region: tsdb,,1435740133573.1a692e2668a2b4a71aaf2805f9b00a72.
at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2001)
at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1772)
at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1704)
at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:445)
at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:407)
at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:69)
at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:225)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NegativeArraySizeException
at org.apache.hadoop.hbase.CellComparator.getMinimumMidpointArray(CellComparator.java:494)
at org.apache.hadoop.hbase.CellComparator.getMidpoint(CellComparator.java:448)
at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.finishBlock(HFileWriterV2.java:165)
at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.checkBlockBoundary(HFileWriterV2.java:146)
at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.append(HFileWriterV2.java:263)
at org.apache.hadoop.hbase.io.hfile.HFileWriterV3.append(HFileWriterV3.java:87)
at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.append(StoreFile.java:932)
at org.apache.hadoop.hbase.regionserver.StoreFlusher.performFlush(StoreFlusher.java:121)
at org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:71)
at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:879)
at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2128)
at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1955)
... 7 more
2015-07-01 18:15:42,398 FATAL [MemStoreFlusher.0] regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint]
Master:
this is the same as the one on the regionserver:
2015-07-01 18:15:42,596 ERROR [B.defaultRpcServer.handler=15,queue=0,port=16020] master.MasterRpcServices: Region server node1.vmcluster,16201,1435738612093 reported a fatal error:
ABORTING region server node1.vmcluster,16201,1435738612093: Replay of WAL required. Forcing server shutdown
Cause:
org.apache.hadoop.hbase.DroppedSnapshotException: region: tsdb,,1435740133573.1a692e2668a2b4a71aaf2805f9b00a72.
at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2001)
at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1772)
at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1704)
at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:445)
at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:407)
at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:69)
at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:225)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NegativeArraySizeException
at org.apache.hadoop.hbase.CellComparator.getMinimumMidpointArray(CellComparator.java:494)
at org.apache.hadoop.hbase.CellComparator.getMidpoint(CellComparator.java:448)
at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.finishBlock(HFileWriterV2.java:165)
at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.checkBlockBoundary(HFileWriterV2.java:146)
at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.append(HFileWriterV2.java:263)
at org.apache.hadoop.hbase.io.hfile.HFileWriterV3.append(HFileWriterV3.java:87)
at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.append(StoreFile.java:932)
at org.apache.hadoop.hbase.regionserver.StoreFlusher.performFlush(StoreFlusher.java:121)
at org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:71)
at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:879)
at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2128)
at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1955)
... 7 more
and this:
2015-07-01 18:17:16,971 INFO [node1.vmcluster,16020,1435738420751.splitLogManagerTimeoutMonitor] master.SplitLogManager: total tasks = 1 unassigned = 1 tasks={/hbase/splitWAL/WALs%2Fnode1.vmcluster%2C16201%2C1435738612093-splitting%2Fnode1.vmcluster%252C16201%252C1435738612093..meta.1435738616764.meta=last_update = -1 last_version = -1 cur_worker_name = null status = in_progress incarnation = 0 resubmits = 0 batch = installed = 1 done = 0 error = 0}
2015-07-01 18:17:21,976 INFO [node1.vmcluster,16020,1435738420751.splitLogManagerTimeoutMonitor] master.SplitLogManager: total tasks = 1 unassigned = 1 tasks={/hbase/splitWAL/WALs%2Fnode1.vmcluster%2C16201%2C1435738612093-splitting%2Fnode1.vmcluster%252C16201%252C1435738612093..meta.1435738616764.meta=last_update = -1 last_version = -1 cur_worker_name = null status = in_progress incarnation = 0 resubmits = 0 batch = installed = 1 done = 0 error = 0}
2015-07-01 18:17:26,979 INFO [node1.vmcluster,16020,1435738420751.splitLogManagerTimeoutMonitor] master.SplitLogManager: total tasks = 1 unassigned = 1 tasks={/hbase/splitWAL/WALs%2Fnode1.vmcluster%2C16201%2C1435738612093-splitting%2Fnode1.vmcluster%252C16201%252C1435738612093..meta.1435738616764.meta=last_update = -1 last_version = -1 cur_worker_name = null status = in_progress incarnation = 0 resubmits = 0 batch = installed = 1 done = 0 error = 0}
2015-07-01 18:17:31,983 INFO [node1.vmcluster,16020,1435738420751.splitLogManagerTimeoutMonitor] master.SplitLogManager: total tasks = 1 unassigned = 1 tasks={/hbase/splitWAL/WALs%2Fnode1.vmcluster%2C16201%2C1435738612093-splitting%2Fnode1.vmcluster%252C16201%252C1435738612093..meta.1435738616764.meta=last_update = -1 last_version = -1 cur_worker_name = null status = in_progress incarnation = 0 resubmits = 0 batch = installed = 1 done = 0 error = 0}
2015-07-01 18:17:36,985 INFO [node1.vmcluster,16020,1435738420751.splitLogManagerTimeoutMonitor] master.SplitLogManager: total tasks = 1 unassigned = 1 tasks={/hbase/splitWAL/WALs%2Fnode1.vmcluster%2C16201%2C1435738612093-splitting%2Fnode1.vmcluster%252C16201%252C1435738612093..meta.1435738616764.meta=last_update = -1 last_version = -1 cur_worker_name = null status = in_progress incarnation = 0 resubmits = 0 batch = installed = 1 done = 0 error = 0}
2015-07-01 18:17:41,992 INFO [node1.vmcluster,16020,1435738420751.splitLogManagerTimeoutMonitor] master.SplitLogManager: total tasks = 1 unassigned = 1 tasks={/hbase/splitWAL/WALs%2Fnode1.vmcluster%2C16201%2C1435738612093-splitting%2Fnode1.vmcluster%252C16201%252C1435738612093..meta.1435738616764.meta=last_update = -1 last_version = -1 cur_worker_name = null status = in_progress incarnation = 0 resubmits = 0 batch = installed = 1 done = 0 error = 0}
2015-07-01 18:17:45,283 WARN [CatalogJanitor-node1:16020] master.CatalogJanitor: Failed scan of catalog table
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=351, exceptions:
Wed Jul 01 18:17:45 KST 2015, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68275: row '' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=node1.vmcluster,16201,1435738612093, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:264)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:215)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:56)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:288)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:267)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:139)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:134)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:823)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:187)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:89)
at org.apache.hadoop.hbase.master.CatalogJanitor.getMergedRegionsAndSplitParents(CatalogJanitor.java:169)
at org.apache.hadoop.hbase.master.CatalogJanitor.getMergedRegionsAndSplitParents(CatalogJanitor.java:121)
at org.apache.hadoop.hbase.master.CatalogJanitor.scan(CatalogJanitor.java:222)
at org.apache.hadoop.hbase.master.CatalogJanitor.chore(CatalogJanitor.java:103)
at org.apache.hadoop.hbase.Chore.run(Chore.java:80)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68275: row '' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=node1.vmcluster,16201,1435738612093, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:310)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:291)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
... 1 more
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:403)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:709)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:880)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:849)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1173)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:216)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:300)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:31889)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:344)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:188)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
... 6 more
2015-07-01 18:17:46,995 INFO [node1.vmcluster,16020,1435738420751.splitLogManagerTimeoutMonitor] master.SplitLogManager: total tasks = 1 unassigned = 1 tasks={/hbase/splitWAL/WALs%2Fnode1.vmcluster%2C16201%2C1435738612093-splitting%2Fnode1.vmcluster%252C16201%252C1435738612093..meta.1435738616764.meta=last_update = -1 last_version = -1 cur_worker_name = null status = in_progress incarnation = 0 resubmits = 0 batch = installed = 1 done = 0 error = 0}
2015-07-01 18:17:52,002 INFO [node1.vmcluster,16020,1435738420751.splitLogManagerTimeoutMonitor] master.SplitLogManager: total tasks = 1 unassigned = 1 tasks={/hbase/splitWAL/WALs%2Fnode1.vmcluster%2C16201%2C1435738612093-splitting%2Fnode1.vmcluster%252C16201%252C1435738612093..meta.1435738616764.meta=last_update = -1 last_version = -1 cur_worker_name = null status = in_progress incarnation = 0 resubmits = 0 batch = installed = 1 done = 0 error = 0}
2015-07-01 18:17:57,004 INFO [node1.vmcluster,16020,1435738420751.splitLogManagerTimeoutMonitor] master.SplitLogManager: total tasks = 1 unassigned = 1 tasks={/hbase/splitWAL/WALs%2Fnode1.vmcluster%2C16201%2C1435738612093-splitting%2Fnode1.vmcluster%252C16201%252C1435738612093..meta.1435738616764.meta=last_update = -1 last_version = -1 cur_worker_name = null status = in_progress incarnation = 0 resubmits = 0 batch = installed = 1 done = 0 error = 0}
2015-07-01 18:18:02,006 INFO [node1.vmcluster,16020,1435738420751.splitLogManagerTimeoutMonitor] master.SplitLogManager: total tasks = 1 unassigned = 1 tasks={/hbase/splitWAL/WALs%2Fnode1.vmcluster%2C16201%2C1435738612093-splitting%2Fnode1.vmcluster%252C16201%252C1435738612093..meta.1435738616764.meta=last_update = -1 last_version = -1 cur_worker_name = null status = in_progress incarnation = 0 resubmits = 0 batch = installed = 1 done = 0 error = 0}
2015-07-01 18:18:07,011 INFO [node1.vmcluster,16020,1435738420751.splitLogManagerTimeoutMonitor] master.SplitLogManager: total tasks = 1 unassigned = 1 tasks={/hbase/splitWAL/WALs%2Fnode1.vmcluster%2C16201%2C1435738612093-splitting%2Fnode1.vmcluster%252C16201%252C1435738612093..meta.1435738616764.meta=last_update = -1 last_version = -1 cur_worker_name = null status = in_progress incarnation = 0 resubmits = 0 batch = installed = 1 done = 0 error = 0}
After that, when I restarted HBase, the regionserver showed this error:
2015-07-02 09:17:49,151 INFO [RS_OPEN_REGION-node1:16201-0] regionserver.HRegion: Replaying edits from hdfs://node1.vmcluster:9000/hbase/data/default/tsdb/1a692e2668a2b4a71aaf2805f9b00a72/recovered.edits/0000000000000169657
2015-07-02 09:17:49,343 INFO [RS_OPEN_REGION-node1:16201-0] regionserver.HRegion: Started memstore flush for tsdb,,1435740133573.1a692e2668a2b4a71aaf2805f9b00a72., current region memstore size 132.20 MB; wal is null, using passed sequenceid=169615
2015-07-02 09:17:49,428 ERROR [RS_OPEN_REGION-node1:16201-0] handler.OpenRegionHandler: Failed open of region=tsdb,,1435740133573.1a692e2668a2b4a71aaf2805f9b00a72., starting to roll back the global memstore size.
org.apache.hadoop.hbase.DroppedSnapshotException: region: tsdb,,1435740133573.1a692e2668a2b4a71aaf2805f9b00a72.
at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2001)
at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:3694)
at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:3499)
at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:889)
at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:769)
at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:742)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4921)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4887)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4858)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4814)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4765)
at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:356)
at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:126)
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NegativeArraySizeException
at org.apache.hadoop.hbase.CellComparator.getMinimumMidpointArray(CellComparator.java:490)
at org.apache.hadoop.hbase.CellComparator.getMidpoint(CellComparator.java:448)
at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.finishBlock(HFileWriterV2.java:165)
at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.checkBlockBoundary(HFileWriterV2.java:146)
at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.append(HFileWriterV2.java:263)
at org.apache.hadoop.hbase.io.hfile.HFileWriterV3.append(HFileWriterV3.java:87)
at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.append(StoreFile.java:932)
at org.apache.hadoop.hbase.regionserver.StoreFlusher.performFlush(StoreFlusher.java:121)
at org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:71)
at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:879)
at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2128)
at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1955)
... 16 more
2015-07-02 09:17:49,430 INFO [RS_OPEN_REGION-node1:16201-0] coordination.ZkOpenRegionCoordination: Opening of region {ENCODED => 1a692e2668a2b4a71aaf2805f9b00a72, NAME => 'tsdb,,1435740133573.1a692e2668a2b4a71aaf2805f9b00a72.', STARTKEY => '', ENDKEY => '\x00\x15\x08M|kp\x00\x00\x01\x00\x00\x01'} failed, transitioning from OPENING to FAILED_OPEN in ZK, expecting version 1
2015-07-02 09:17:49,443 INFO [PriorityRpcServer.handler=9,queue=1,port=16201] regionserver.RSRpcServices: Open tsdb,,1435740133573.1a692e2668a2b4a71aaf2805f9b00a72.
2015-07-02 09:17:49,458 INFO [StoreOpener-1a692e2668a2b4a71aaf2805f9b00a72-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=3, currentSize=533360, freeSize=509808592, maxSize=510341952, heapSize=533360, minSize=484824864, minFactor=0.95, multiSize=242412432, multiFactor=0.5, singleSize=121206216, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2015-07-02 09:17:49,458 INFO [StoreOpener-1a692e2668a2b4a71aaf2805f9b00a72-1] compactions.CompactionConfiguration: size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2015-07-02 09:17:49,519 INFO [RS_OPEN_REGION-node1:16201-2] regionserver.HRegion: Replaying edits from hdfs://node1.vmcluster:9000/hbase/data/default/tsdb/1a692e2668a2b4a71aaf2805f9b00a72/recovered.edits/0000000000000169567
Update
Today I ran the test at very light traffic (1000 records/sec for 2000 seconds), and the problem came back.
| According to HBASE-13329, a short type overflow occurs with "short diffIdx = 0" in hbase-common/src/main/java/org/apache/hadoop/hbase/CellComparator.java. A patch became available today (2015-07-06); in it, the HBase developers change the declaration from "short diffIdx = 0" to "int diffIdx = 0".
I ran tests with the patch, and it worked properly.
| OpenTSDB | 31,164,505 | 12 |
I'm using Qt4 to post some data points to an OpenTSDB server, which doesn't support chunked HTTP requests.
The code is basically this:
QNetworkRequest request(m_url);
request.setHeader(QNetworkRequest::ContentTypeHeader, QString("application/json"));
request.setHeader(QNetworkRequest::ContentLengthHeader, jsonRequest.toAscii().size());
m_networkAccessManager.post(request, jsonRequest.toAscii());
jsonRequest is a QString containing the data points. This code is called from time to time to upload data to the server, and it usually works fine. However, sometimes I receive an error from openTSDB stating that "Chunked request not supported.".
This seems to happen when the request gets a little bigger (and by bigger, I mean some KB of data).
edit:
I've done a tcpdump of the request when the problem arises, and in fact it doesn't seem to be chunked:
POST /api/put HTTP/1.1
Content-Type: application/json
Content-Length: 14073
Connection: Keep-Alive
Accept-Encoding: gzip
Accept-Language: en,*
User-Agent: Mozilla/5.0
Host: 192.168.xx.xxx:xxxx
[{"metric":"slt.reader.temperature","timestamp":1420736269427,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736280628,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736291637,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736302748,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736313840,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736325011,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736336039,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736347182,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736358210,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736369372,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736380401,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.tags_read","timestamp":1420736385286,"value":0,"tags":{"sltId":"5036","readerId":"1","antenna":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.tags_read","timestamp":1420736385286,"value":10,"tags":{"sltId":"5036","readerId":"1","antenna":"2","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.tags_read","timestamp":1420736385286,"value":7,"tags":{"sltId":"5036","readerId":"1","antenna":"3","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.tags_read","timestamp":1420736385287,"value":6,"tags":{"sltId":"5036","readerId":"1","antenna":"4","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.tag_transactions","timestamp":1420736385287,"value":13,"tags":{"sltId":"5036","readerId":"1","antenna":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.tag_transactions","timestamp":1420736385287,"value":99,"tags":{"sltId":"5036","readerId":"1","antenna":"2","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.tag_transactions","timestamp":1420736385287,"value":102,"tags":{"sltId":"5036","readerId":"1","antenna":"3","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.tag_transactions","timestamp":1420736385287,"value":93,"tags":{"sltId":"5036","readerId":"1","antenna":"4","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.transactionsNotDeciphered","timestamp":1420736385287,"value":0,"tags":{"sltId":"5036","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736391436,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736402608,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736413642,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736424676,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736435823,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.
temperature","timestamp":1420736446850,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736458007,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736469060,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736480207,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736491418,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736502620,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736513638,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736524682,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736535712,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736546742,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736557834,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736568858,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736579932,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736590966,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736601993,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736613183,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736624357,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736635387,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736646414,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736657493,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736668624,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736679743,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.tags_read","timestamp":1420736685286,"value":0,"tags":{"sltId":"5036","readerId":"1","antenna":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.tags_read","timestamp":1420736685286,"value":8,"tags":{"sltId":"5036","readerId":"1","antenna":"2","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.tags_read","timestamp":1420736685286,"value":9,"tags":{"sltId":"5036","readerId":"1","antenna":"3","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.tags_read","timestamp":1420736685295,"value":5,"tags":{"sltId":"5036","read
erId":"1","antenna":"4","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.tag_transactions","timestamp":1420736685295,"value":4,"tags":{"sltId":"5036","readerId":"1","antenna":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.tag_transactions","timestamp":1420736685295,"value":88,"tags":{"sltId":"5036","readerId":"1","antenna":"2","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.tag_transactions","timestamp":1420736685295,"value":130,"tags":{"sltId":"5036","readerId":"1","antenna":"3","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.tag_transactions","timestamp":1420736685296,"value":123,"tags":{"sltId":"5036","readerId":"1","antenna":"4","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.transactionsNotDeciphered","timestamp":1420736685296,"value":0,"tags":{"sltId":"5036","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736690786,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736701910,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736712968,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736723999,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736735075,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736746106,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736757266,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736768455,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736779473,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736790606,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736801633,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736812713,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736823740,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736834856,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736845958,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736857103,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736868216,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736879292,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736890320,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736901503,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxx
xxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736912608,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736923761,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736934850,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736946033,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736957061,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736968223,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736979256,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.tags_read","timestamp":1420736985284,"value":0,"tags":{"sltId":"5036","readerId":"1","antenna":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.tags_read","timestamp":1420736985285,"value":16,"tags":{"sltId":"5036","readerId":"1","antenna":"2","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.tags_read","timestamp":1420736985285,"value":9,"tags":{"sltId":"5036","readerId":"1","antenna":"3","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.tags_read","timestamp":1420736985285,"value":11,"tags":{"sltId":"5036","readerId":"1","antenna":"4","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.tag_transactions","timestamp":1420736985285,"value":9,"tags":{"sltId":"5036","readerId":"1","antenna":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.tag_transactions","timestamp":1420736985285,"value":162,"tags":{"sltId":"5036","readerId":"1","antenna":"2","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.tag_transactions","timestamp":1420736985285,"value":166,"tags":{"sltId":"5036","readerId":"1","antenna":"3","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.tag_transactions","timestamp":1420736985285,"value":157,"tags":{"sltId":"5036","readerId":"1","antenna":"4","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.transactionsNotDeciphered","timestamp":1420736985286,"value":0,"tags":{"sltId":"5036","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420736990353,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420737001532,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420737012658,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420737023691,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420737034823,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420737045906,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420737056942,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}},{"metric":"slt.reader.temperature","timestamp":1420737068032,"value":56,"tags":{"sltId":"5036","readerId":"1","host":"xxxxxxxxxxxxxxxxx"}}]HTTP/1.1 400 Bad Request
Content-Length: 1080
Content-Type: text/html; charset=UTF-8
Date: Thu, 08 Jan 2015 17:18:43 GMT
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"><html><head><meta http-equiv=content-type content="text/html;charset=utf-8"><title>Bad Request</title>
<style><!--
body{font-family:arial,sans-serif;margin-left:2em}A.l:link{color:#6f6f6f}A.u:link{color:green}.subg{background-color:#e2f4f7}.fwf{font-family:monospace;white-space:pre-wrap}//--></style></head>
<body text=#000000 bgcolor=#ffffff><table border=0 cellpadding=2 cellspacing=0 width=100%><tr><td rowspan=3 width=1% nowrap><b><font color=#c71a32 size=10>T</font><font color=#00a189 size=10>S</font><font color=#1a65b7 size=10>D</font> </b><td> </td></tr><tr><td class=subg><font color=#507e9b><b>Looks like it's your fault this time</b></td></tr><tr><td> </td></tr></table><blockquote><h1>Bad Request</h1>Sorry but your request was rejected as being invalid.<br/><br/>The reason provided was:<blockquote>Chunked request not supported.</blockquote></blockquote><table width=100% cellpadding=0 cellspacing=0><tr><td class=subg><img alt="" width=1 height=6></td></tr></table></body></html>
I was thinking Qt was changing to use a chunked request when the request got larger, but it's not the case. So the current question is: what is a chunked request in openTSDB?
| I've finally found the reason it thinks my request is chunked: OpenTSDB internally uses Netty for networking, and if Netty reads a block that doesn't contain the complete request, then it's flagged as chunked, even if there's no Transfer-Encoding header in the request.
There are two possible solutions for this:
adding "tsd.http.request.enable_chunked = true" to the config file, so that it doesn't reject chunked requests and adding "tsd.http.request.max_chunk = " to the config file, and setting it to a sensible size for your requests
If you can change the client, force it to send smaller requests
Please note that enabling chunked requests for public facing servers may not be a good idea, as a malicious client can send unbounded requests, overloading the server.
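For reference, a minimal opentsdb.conf sketch of the first option (the 65536-byte limit is only an illustrative value, pick one that fits your request sizes):
# allow Netty "chunked" reads and cap the total accumulated request size
tsd.http.request.enable_chunked = true
tsd.http.request.max_chunk = 65536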
Credit goes to IRC users jaylr and manolama, who provided the info on the #opentsdb channel on freenode.net
| OpenTSDB | 27,841,071 | 10 |
I'm building a one-off smart-home data collection box. It's expected to run on a raspberry-pi-class machine (~1G RAM), handling about 200K data points per day (each a 64-bit int). We've been working with vanilla MySQL, but performance is starting to crumble, especially for queries on the number of entries in a given time interval.
As I understand it, this is basically exactly what time-series databases are designed for. If anything, the unusual thing about my situation is that the volume is relatively low, and so is the amount of RAM available.
A quick look at Wikipedia suggests OpenTSDB, InfluxDB, and possibly BlueFlood. OpenTSDB suggests 4G of RAM, though that may be for high-volume settings. InfluxDB actually mentions sensor readings, but I can't find a lot of information on what kind of resources are required.
Okay, so here's my actual question: are there obvious red flags that would make any of these systems inappropriate for the project I describe?
I realize that this is an invitation to flame, so I'm counting on folks to keep it on the bright and helpful side. Many thanks in advance!
| InfluxDB should be fine with 1 GB RAM at that volume. Embedded sensors and low-power devices like Raspberry Pi's are definitely a core use case, although we haven't done much testing with the latest betas beyond compiling on ARM.
InfluxDB 0.9.0 was just released, and 0.9.x should be available in our Hosted environment in a few weeks. The low end instances have 1 GB RAM and 1 CPU equivalent, so they are a reasonable proxy for your Pi performance, and the free trial lasts two weeks.
If you have more specific questions, please reach out to us at influxdb@googlegroups.com or support@influxdb.com and we'll see how we can help.
| OpenTSDB | 30,930,390 | 10 |
In the following regex what does "(?i)" and "?@" mean?
(?i)<.*?@(?P<domain>\w+\.\w+)(?=>)
I know that "?" means zero or one and that i sets case insensitivity.
This regex captures domains from an email address in a mailto field, but does not include the @ sign. It was generated by the erex command from within SPLUNK 6.0.2.
| demo here : https://regex101.com/r/hE9gB4/1
(?i)<.*?@(?P<domain>\w+\.\w+)(?=>)
It's actually extracting the domain name from the email address:
(?i) makes the match case-insensitive, and
?@ is nothing special as a unit: the @ simply matches the character @ literally.
The ? in your ?@ is part of .*?, which is the lazy (non-greedy) quantifier. It gives you the text between the < and the @.
If you don't use the ? after the .*, it will match everything after the < to the end (that is the greedy behaviour).
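A quick sketch in Python, whose re module accepts the same (?P<name>...) named-group syntax (the sample string is made up for illustration):
import re

pattern = re.compile(r'(?i)<.*?@(?P<domain>\w+\.\w+)(?=>)')

# Hypothetical mailto field, purely for illustration
sample = 'mailto: <John.Doe@Example.com>'

match = pattern.search(sample)
if match:
    print(match.group('domain'))  # prints: Example.com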
| Splunk | 22,961,535 | 36 |
I'm pushing my logs to a local splunk installation. Recently I found that the following error repeats a lot (about once every minute):
Error L10
(output buffer overflow): 7150 messages dropped since
2013-06-26T19:19:52+00:00.134 <13>1 2013-07-08T14:59:47.162084+00:00
host app web.1 - [\x1B[37minfo\x1B[0m] application - Perf - it took 31
milliseconds to fetch row IDs ...
The errors repeat quite a lot, and the documentation says these errors happen when your application produces a lot of logs.
Thing is, I barely have 20-30 logs per second, which isn't really considered a lot. I tested with other drains (added the built-in papertrail plugin), and these errors do not happen there - so they are specific to the outgoing splunk drain.
I thought maybe the splunk machine was loaded and thus not accepting logs fast enough, but its CPU is idle, and it has plenty of disk & memory.
Also, I believe the app (Play 2 app) is auto-flushing logs to console all the time, so there is no big buildup of unflushed logs followed by a release.
What can cause a slow drain speed for the outgoing splunk drain? How should I debug it?
| After a long ping-pong with the Heroku team, we found the answer:
I used the URL prefix http:// when configuring the log drain, instead of syslog://. When I changed the URL to syslog://, the error went away, and logs are now flowing correctly through Splunk.
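For reference, the drain swap can be done with the Heroku CLI along these lines (the app name, host and port are placeholders):
heroku drains:remove http://splunk.example.com:514 -a my-app
heroku drains:add syslog://splunk.example.com:514 -a my-app
heroku drains -a my-app    # verify the new drain is listed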
| Splunk | 17,532,337 | 14 |
I'm using the Splunk HttpEventCollectorLogbackAppender to automatically send application logs to Splunk. I've been trying to set the host, source, and sourcetype but am not having any luck getting them sent to Splunk.
Is it possible to set the host, source, or sourcetype using the Splunk HttpEventCollectorLogbackAppender and if so, how do I do it?
I've been trying to send JSON and it doesn't seem to be working.
Here's the documentation that tells you what options are available; it says they need to be passed as a query string, but since I'm using the out-of-the-box Splunk appender I'm not sure how to set those.
http://dev.splunk.com/view/event-collector/SP-CAAAE6P
Splunk logback appender:
...
<!-- SPLUNK appender -->
<appender name="SPLUNK" class="com.splunk.logging.HttpEventCollectorLogbackAppender">
<url>http://myurl:8088</url>
<token>mytoken</token>
<disableCertificateValidation>true</disableCertificateValidation>
<batch_size_count>1</batch_size_count>
<layout class="ch.qos.logback.classic.PatternLayout">
<pattern>%logger: %msg%n</pattern>
</layout>
</appender>
<root level="INFO">
<appender-ref ref="SPLUNK"/>
</root>
...
Example log line
Logger logger = LoggerFactory.getLogger(MyClass.class);
logger.debug("I'm logging debug stuff");
| Any setters on HttpEventCollectorLogbackAppender can be added to your logback configuration.
So to invoke setHost, setSource and setSourcetype you add them to your logback configuration like this:
<appender name="SPLUNK" class="com.splunk.logging.HttpEventCollectorLogbackAppender">
<url>http://myurl:8088</url>
<host>x</host>
<source>y</source>
<sourcetype>z</sourcetype>
<token>mytoken</token>
<disableCertificateValidation>true</disableCertificateValidation>
<batch_size_count>1</batch_size_count>
<layout class="ch.qos.logback.classic.PatternLayout">
<pattern>%logger: %msg%n</pattern>
</layout>
</appender>
| Splunk | 41,005,325 | 10 |
I need to ship my cloudwatch logs to a log analysis service.
I've followed along with these articles here and here and got it working by hand, no worries.
Now I'm trying to automate all this with Terraform (roles/policies, security groups, cloudwatch log group, lambda, and triggering the lambda from the log group).
But I can't figure out how to use TF to configure AWS to trigger the lambda from the cloudwatch logs.
I can link the two TF resources together by hand by doing the following (in the Lambda web console UI):
go into the lambda function's "Triggers" section
click "Add Trigger"
select "cloudwatch logs" from the list of trigger types
select the log group I want to trigger the lambda
enter a filter name
leave the filter pattern empty (implying trigger on all log streams)
make sure "enable trigger" is selected
click the submit button
Once that's done, the lambda shows up on the cloudwatch logs console in the subscriptions column - displays as "Lambda (cloudwatch-sumologic-lambda)".
I tried to create the subscription with the following TF resource:
resource "aws_cloudwatch_log_subscription_filter" "cloudwatch-sumologic-lambda-subscription" {
name = "cloudwatch-sumologic-lambda-subscription"
role_arn = "${aws_iam_role.jordi-waf-cloudwatch-lambda-role.arn}"
log_group_name = "${aws_cloudwatch_log_group.jordi-waf-int-app-loggroup.name}"
filter_pattern = "logtype test"
destination_arn = "${aws_lambda_function.cloudwatch-sumologic-lambda.arn}"
}
But it fails with:
aws_cloudwatch_log_subscription_filter.cloudwatch-sumologic-lambda-subscription: InvalidParameterException: destinationArn for vendor lambda cannot be used with roleArn
I found this answer about setting up a similar thing for a scheduled event, but that doesn't seem to be equivalent to what the console actions I described above do (the console UI method doesn't create an event/rule that I can see).
Can someone give me a pointer on what I'm doing wrong please?
| I had the aws_cloudwatch_log_subscription_filter resource defined incorrectly - you should not provide the role_arn argument in this situation.
You also need to add an aws_lambda_permission resource (with a depends_on relationship defined on the filter or TF may do it in the wrong order).
Note that the AWS lambda console UI adds the lambda permission for you invisibly, so beware that the aws_cloudwatch_log_subscription_filter will work without the permission resource if you happen to have done the same action before in the console UI.
The necessary TF config looks like this (the last two resources are the relevant ones for configuring the actual cloudwatch->lambda trigger):
// intended for application logs (access logs, modsec, etc.)
resource "aws_cloudwatch_log_group" "test-app-loggroup" {
name = "test-app"
retention_in_days = 90
}
resource "aws_security_group" "cloudwatch-sumologic-lambda-sg" {
name = "cloudwatch-sumologic-lambda-sg"
tags {
Name = "cloudwatch-sumologic-lambda-sg"
}
description = "Security group for lambda to move logs from CWL to SumoLogic"
vpc_id = "${aws_vpc.dev-vpc.id}"
}
resource "aws_security_group_rule" "https-egress-cloudwatch-sumologic-to-internet" {
type = "egress"
from_port = 443
to_port = 443
protocol = "tcp"
security_group_id = "${aws_security_group.cloudwatch-sumologic-lambda-sg.id}"
cidr_blocks = ["0.0.0.0/0"]
}
resource "aws_iam_role" "test-cloudwatch-lambda-role" {
name = "test-cloudwatch-lambda-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow"
}
]
}
EOF
}
resource "aws_iam_role_policy" "test-cloudwatch-lambda-policy" {
name = "test-cloudwatch-lambda-policy"
role = "${aws_iam_role.test-cloudwatch-lambda-role.id}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "CopiedFromTemplateAWSLambdaVPCAccessExecutionRole1",
"Effect": "Allow",
"Action": [
"ec2:CreateNetworkInterface"
],
"Resource": "*"
},
{
"Sid": "CopiedFromTemplateAWSLambdaVPCAccessExecutionRole2",
"Effect": "Allow",
"Action": [
"ec2:DescribeNetworkInterfaces",
"ec2:DeleteNetworkInterface"
],
"Resource": "arn:aws:ec2:ap-southeast-2:${var.dev_vpc_account_id}:network-interface/*"
},
{
"Sid": "CopiedFromTemplateAWSLambdaBasicExecutionRole1",
"Effect": "Allow",
"Action": "logs:CreateLogGroup",
"Resource": "arn:aws:logs:ap-southeast-2:${var.dev_vpc_account_id}:*"
},
{
"Sid": "CopiedFromTemplateAWSLambdaBasicExecutionRole2",
"Effect": "Allow",
"Action": [
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:ap-southeast-2:${var.dev_vpc_account_id}:log-group:/aws/lambda/*"
]
},
{
"Sid": "CopiedFromTemplateAWSLambdaAMIExecutionRole",
"Effect": "Allow",
"Action": [
"ec2:DescribeImages"
],
"Resource": "*"
}
]
}
EOF
}
resource "aws_lambda_function" "cloudwatch-sumologic-lambda" {
function_name = "cloudwatch-sumologic-lambda"
filename = "${var.lambda_dir}/cloudwatchSumologicLambda.zip"
source_code_hash = "${base64sha256(file("${var.lambda_dir}/cloudwatchSumologicLambda.zip"))}"
handler = "cloudwatchSumologic.handler"
role = "${aws_iam_role.test-cloudwatch-lambda-role.arn}"
memory_size = "128"
runtime = "nodejs4.3"
// set low because I'm concerned about cost-blowout in the case of mis-configuration
timeout = "15"
vpc_config = {
subnet_ids = ["${aws_subnet.dev-private-subnet.id}"]
security_group_ids = ["${aws_security_group.cloudwatch-sumologic-lambda-sg.id}"]
}
}
resource "aws_lambda_permission" "test-app-allow-cloudwatch" {
statement_id = "test-app-allow-cloudwatch"
action = "lambda:InvokeFunction"
function_name = "${aws_lambda_function.cloudwatch-sumologic-lambda.arn}"
principal = "logs.ap-southeast-2.amazonaws.com"
source_arn = "${aws_cloudwatch_log_group.test-app-loggroup.arn}"
}
resource "aws_cloudwatch_log_subscription_filter" "test-app-cloudwatch-sumologic-lambda-subscription" {
depends_on = ["aws_lambda_permission.test-app-allow-cloudwatch"]
name = "cloudwatch-sumologic-lambda-subscription"
log_group_name = "${aws_cloudwatch_log_group.test-app-loggroup.name}"
filter_pattern = ""
destination_arn = "${aws_lambda_function.cloudwatch-sumologic-lambda.arn}"
}
EDIT: Please note that the above TF code was written years ago, using version 0.11.x - it should still work but there may be better ways of doing things. Specifically, don't use an inline policy like this unless needed, use an aws_iam_policy_document instead - they're just way easier to maintain over time.
| Sumo Logic | 38,407,660 | 40 |
Is it possible to query for a distinct/unique count of a field using Kibana? I am using elastic search as my backend to Kibana.
If so, what is the syntax of the query? Heres a link to the Kibana interface I would like to make my query: http://demo.kibana.org/#/dashboard
I am parsing nginx access logs with logstash and storing the data into elastic search. Then, I use Kibana to run queries and visualize my data in charts. Specifically, I want to know the count of unique IP addresses for a specific time frame using Kibana.
| For Kibana 4 go to this answer
This is easy to do with a terms panel:
If you want the count of distinct IPs in your logs, set the field to clientip, put a big enough number in length (otherwise it will join different IPs under the same group), and set the style to table. After adding the panel, you will have a table with each IP and the count for that IP.
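For what it's worth, in Kibana 4+ a "Unique Count" metric is backed by an Elasticsearch cardinality aggregation, so a rough equivalent query looks something like this (the index pattern and field name are assumptions):
curl -XPOST 'localhost:9200/logstash-*/_search' -d '{
  "size": 0,
  "aggs": {
    "unique_ips": { "cardinality": { "field": "clientip" } }
  }
}'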
| Logstash | 19,102,220 | 78 |
I am building a proof of concept using Elasticsearch Logstash and Kibana for one of my projects. I have the dashboard with the graphs working without any issue. One of the requirements for my project is the ability to download the file(csv/excel).
In Kibana the only option I saw for downloading the file is by clicking the edit button on the visualization. Is it possible to add a link on the dashboard that would allow users to download the file without going into edit mode? Secondly, I would like to disable/hide the edit mode for anyone other than me who views the dashboard.
Thanks
| FYI : How to download data in CSV from Kibana:
In Kibana:
1. Go to 'Discover' in the left side.
2. Select the index field (based on your dashboard data). (*** In case you are not sure which index to select: go to the Management tab --> Saved Objects --> Dashboard --> select the dashboard name --> scroll down to JSON --> you will see the index name.)
3. On the left side you see all the variables available in the data --> click the variable name that you want in the CSV --> click add --> this variable will be added on the right side of the available columns.
4. In the top right section of Kibana there is the time filter --> click it --> select the duration for which you want the CSV.
5. Top upper right --> Reporting --> save this time/variable selection as a new report --> click Generate CSV.
6. Go to 'Management' in the left side --> 'Reporting' --> download your CSV.
| Logstash | 34,792,146 | 74 |
I am using ELK to create dashboards from my log files. I have a log file with entries that contain an id value and a "success"/"failure" value, displaying whether an operation with a given id succeeded or failed. Each operation/id can fail an unlimited number of times and succeed at most once. In my Kibana dashboard I want to display the count of log entries with a "failure" value for each operation id, but I want to filter out cases where a "success" log entry for the id exists. i.e. I am only interested in operations that never succeeded. Any hints for tricks that would achieve this?
| This is easy in the Kibana 5 search bar. Just add a filter:
!(_exists_:"your_variable")
you can toggle the filter or write the inverse query as
_exists_:"your_variable"
In Kibana 4 and Kibana 3 you can use this query which is now deprecated
_missing_:"your_variable"
NOTE: In Elasticsearch 7.x, Kibana now has a pull down to select KQL or Lucene style queries in the search bar. Be mindful that syntax such as _exists_:FIELD is a Lucene syntax and you need to set the pulldown accordingly.
| Logstash | 27,537,521 | 68 |
In one of my projects, I am planning to use ElasticSearch with MySQL.
I have successfully installed ElasticSearch. I am able to manage indices in ES separately, but I don't know how to implement the same with MySQL.
I have read a couple of documents, but I am a bit confused and do not have a clear idea.
| As of ES 5.x, this feature is available out of the box with the Logstash JDBC input plugin.
It will periodically import data from the database and push it to the ES server.
You have to create a simple import file like the one given below (which is also described here) and use Logstash to run it. Logstash supports running this configuration on a schedule.
# file: contacts-index-logstash.conf
input {
jdbc {
jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
jdbc_user => "user"
jdbc_password => "pswd"
schedule => "* * * * *"
jdbc_validate_connection => true
jdbc_driver_library => "/path/to/latest/mysql-connector-java-jar"
jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
statement => "SELECT * from contacts where updatedAt > :sql_last_value"
}
}
output {
elasticsearch {
protocol => http
index => "contacts"
document_type => "contact"
document_id => "%{id}"
host => "ES_NODE_HOST"
}
}
# "* * * * *" -> run every minute
# sql_last_value is a built in parameter whose value is set to Thursday, 1 January 1970,
# or 0 if use_column_value is true and tracking_column is set
You can download the MySQL connector jar from Maven here.
In case the indexes do not exist in ES when this script is executed, they will be created automatically, just like a normal POST call to Elasticsearch.
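To run it, point Logstash at the file (the file name is the one assumed above; the exact path depends on your installation):
bin/logstash -f contacts-index-logstash.conf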
| Logstash | 36,152,152 | 66 |
What are the main differences between Graylog2 and Kibana?
We already use Graylog2 but I must admit I don't really like the UI.
Just wonder in case it may be helpful to switch to Kibana.
| At my company we started with Graylog2 and recently installed Kibana3.
My personal opinion is that Kibana3 is more suited towards non-dev, while Graylog isn't.
Kibana:
Pretty dashboards
Graphs, charts and images
"panel" customization, adding parallel coordinate graphs for example
Easy/flexible management of dashboards (they save directly into their own ES index)
Easy deployment (just clone the Kibana3 repo and serve it with your fav. web server)
Graylog2:
Much simpler interface
Plain log "analysis" and good search capabilities
Built in authentication and user permissions
Built in alert mechanisms for your chosen streams
We still have Graylog2 running in parallel to Kibana3, but I don't think it will last for much longer. Kibana3 provides most, if not all, of the capabilities (that we needed), and on top of that it allows management friendly interfaces.
| Logstash | 17,210,184 | 46 |
I have a field that contains numbers. I want a filter that shows all logs that are less than a constant value.
When I try to add a new query filter, all I can see is a query string option.
| If you are talking about the query field, a syntax like this works:
field:<10
Will find just records with a field value less than 10. Found this by experimentation one day -- don't know if it's documented anywhere.
| Logstash | 26,303,899 | 43 |
Ultimately I want to have a scalable search solution for the data in PostgreSql. My findings point me towards using Logstash to ship write events from Postgres to ElasticSearch, however I have not found a usable solution. The solutions I have found involve using the jdbc input to query all data from Postgres on an interval, and the delete events are not captured.
I think this is a common use case so I hope you guys could share with me your experience, or give me some pointers to proceed.
| If you need to also be notified on DELETEs and delete the respective record in Elasticsearch, it is true that the Logstash jdbc input will not help. You'd have to use a solution working around the binlog as suggested here
However, if you still want to use the Logstash jdbc input, what you could do is simply soft-delete records in PostgreSQL, i.e. create a new BOOLEAN column in order to mark your records as deleted. The same flag would then exist in Elasticsearch and you can exclude them from your searches with a simple term query on the deleted field.
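A sketch of such an exclusion query, assuming the flag is mapped as a boolean field named deleted:
{
  "query": {
    "bool": {
      "must_not": { "term": { "deleted": true } }
    }
  }
}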
Whenever you need to perform some cleanup, you can delete all records flagged deleted in both PostgreSQL and Elasticsearch.
| Logstash | 35,813,923 | 40 |
I have many of my logs indexed in logstash-Year-Week format. If I want to delete indices older than a few weeks, how can I achieve that in Elasticsearch? Is there an easy, seamless way to do that?
| Curator would be an ideal match here.
You can find the link here - https://github.com/elastic/curator
A command like below should work just fine -
curator --host <IP> delete indices --older-than 30 --prefix "twitter-" --time-unit days --timestring '%Y-%m-%d'
You can keep this in cron to remove old indices periodically.
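For example, a crontab entry along these lines (the 1 a.m. daily schedule and the curator path are placeholders; the command itself is the one from above):
0 1 * * * /usr/local/bin/curator --host <IP> delete indices --older-than 30 --prefix "twitter-" --time-unit days --timestring '%Y-%m-%d'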
You can find some examples and docs here - https://www.elastic.co/guide/en/elasticsearch/client/curator/current/examples.html
| Logstash | 33,430,055 | 39 |
I have a web application with a back end in NodeJS and logstash/elasticsearch/kibana to handle system logs (access_error.log, messages.log etc.).
Right now I need to record all JavaScript client side errors into kibana also. What is the best way to do this?
EDIT: I have to add additional information to this question, as @Jackie Xu provided a partial solution to my problem and as follows from my comment:
I'm most interested in server-side error handling. I think it's not efficient to write each error into a file. I'm looking for best practices to make it more performant.
I need to handle JS error records on the server side more efficiently than just writing them into a file. Could you provide some scenarios for how I could increase server-side logging performance?
| When you say client, I'm assuming here that you mean a logging client and not a web client.
First, make it a habit to log your errors in a common format. Logstash likes consistency, so if you're putting text and JSON in the same output log, you will run into issues. Hint: log in JSON. It's awesome and incredibly flexible.
The overall process will go like this:
Error occurs in your app
Log the error to file, socket, or over a network
Tell logstash how to get (input) that error (i.e. from file, listen over network, etc)
Tell logstash to send (output) the error to Elasticsearch (which can be running on the same machine)
In your app, try using the bunyan logger for node. https://github.com/trentm/node-bunyan
node app index.js
var bunyan = require('bunyan');
var log = bunyan.createLogger({
name: 'myapp',
streams: [{
level: 'info',
stream: process.stdout // log INFO and above to stdout
}, {
level: 'error',
path: '/var/log/myapp-error.log' // log ERROR and above to a file
}]
});
// Log stuff like this
log.info({status: 'started'}, 'foo bar message');
// Also, in express you can catch all errors like this
app.use(function(err, req, res, next) {
log.error(err);
res.send(500, 'An error occurred');
});
Then you need to configure logstash to read those JSON log files and send to Elasticsearch/Kibana. Make a file called myapp.conf and try the following:
logstash config myapp.conf
# Input can read from many places, but here we're just reading the app error log
input {
file {
type => "my-app"
path => [ "/var/log/myapp/*.log" ]
codec => "json"
}
}
# Output can go many places, here we send to elasticsearch (pick one below)
output {
elasticsearch {
# Do this if elasticsearch is running somewhere else
host => "your.elasticsearch.hostname"
# Do this if elasticsearch is running on the same machine
host => "localhost"
# Do this if you want to run an embedded elastic search in logstash
embedded => true
}
}
Then start/restart logstash as such: bin/logstash agent -f myapp.conf web
Go to elasticsearch on http://your-elasticsearch-host:9292 to see the logs coming in.
| Logstash | 24,502,190 | 38 |
We started using Serilog in combination with Elasticsearch, and it's a very efficient way to store structured log data (and later visualize it using tools like Kibana). However, I see the advantage of not writing log data directly to the backend but instead configuring a log broker such as Logstash that can take responsibility for adding tags to log messages, selecting indexes, etc. With this setup applications won't need to have knowledge of log data distribution.
With Logstash in the middle, the question is which Serilog sink is best to use so Logstash can import its data without applying advanced and CPU-intensive filters. I've seen Redis mentioned as a good companion to Logstash, but Serilog doesn't have a Redis sink. Any recommendations for a Serilog sink whose data can be easily transferred by Logstash to an Elasticsearch index?
There is even an approach that uses the Elasticsearch sink first and then loops the data back into Elasticsearch again after some rearranging and extra tagging.
| The accepted answer was written before the sink Serilog.Sinks.Http existed.
Instead of logging to a file and having Filebeat monitor it, one could have the HTTP sink post log events directly to the Logstash HTTP input plugin. This would mean fewer moving parts on the instances where the logs were created.
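On the Logstash side, a minimal sketch of such an HTTP input (the port is an arbitrary example):
input {
  http {
    port => 8080
  }
}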
| Logstash | 25,283,749 | 38 |
In my system, the insertion of data is always done through csv files via logstash. I never pre-define the mapping. But whenever I input a string it is always taken to be analyzed; as a result an entry like hello I am Sinha is split into hello,I,am,Sinha. Is there any way I could change the default/dynamic mapping of elasticsearch so that all strings, irrespective of index, irrespective of type, are taken to be not analyzed? Or is there a way of setting it in the .conf file? Say my conf file looks like
input {
file {
path => "/home/sagnik/work/logstash-1.4.2/bin/promosms_dec15.csv"
type => "promosms_dec15"
start_position => "beginning"
sincedb_path => "/dev/null"
}
}
filter {
csv {
columns => ["Comm_Plan","Queue_Booking","Order_Reference","Multi_Ordertype"]
separator => ","
}
ruby {
code => "event['Generation_Date'] = Date.parse(event['Generation_Date']);"
}
}
output {
elasticsearch {
action => "index"
host => "localhost"
index => "promosms-%{+dd.MM.YYYY}"
workers => 1
}
}
I want all the strings to be not analyzed and I don't mind it being the default setting for all future data to be inserted into elasticsearch either
| Just create a template. Run:
curl -XPUT localhost:9200/_template/template_1 -d '{
"template": "*",
"settings": {
"index.refresh_interval": "5s"
},
"mappings": {
"_default_": {
"_all": {
"enabled": true
},
"dynamic_templates": [
{
"string_fields": {
"match": "*",
"match_mapping_type": "string",
"mapping": {
"index": "not_analyzed",
"omit_norms": true,
"type": "string"
}
}
}
],
"properties": {
"@version": {
"type": "string",
"index": "not_analyzed"
},
"geoip": {
"type": "object",
"dynamic": true,
"path": "full",
"properties": {
"location": {
"type": "geo_point"
}
}
}
}
}
}
}'
| Logstash | 27,483,302 | 37 |
Does Logstash use its own syntax for its config file? Is there a parser or validator for the config file syntax?
For anyone who does not use Logstash but has an idea about file formats, here is a sample of the syntax:
input {
file {
path => "/var/log/messages"
type => "syslog"
}
file {
path => "/var/log/apache/access.log"
type => "apache"
}
}
| The Logstash configuration file is a custom format developed by the Logstash folks using Treetop.
The grammar itself is described in the source file grammar.treetop and compiled using Treetop into the custom grammar.rb parser.
That parser is then used by the pipeline.rb file in order to set up the pipeline from the Logstash configuration.
If you're not that much into Ruby, there's another interesting project called node-logstash which provides a Logstash implementation in Node.js. The configuration format is exactly the same as with the official Logstash, though the parser is obviously a different one written for Node.js. In this project, the Logstash configuration file grammar is described in jison and the parser is also automatically generated, but could be used by any Node.js module simply by requiring that generated parser.
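As an aside, Logstash itself can check a config file for syntax errors from the command line; the exact invocation depends on the version (a sketch, check your release's --help):
bin/logstash agent --configtest -f /etc/logstash/logstash.conf        # older 1.x releases
bin/logstash --config.test_and_exit -f /etc/logstash/logstash.conf    # 5.x and later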
| Logstash | 21,442,715 | 36 |
I am trying to run ElasticSearch with Kibana in Windows 2008 R2.
I followed this article: Install-logstash-on-a-windows-server-with-kibana
Step by step, but all I get is:
Connection Failed
Possibility #1: Your elasticsearch server is down or unreachable
This can be caused by a network outage, or a failure of the Elasticsearch process. If you have recently run a query that required a terms facet to be executed it is possible the process has run out of memory and stopped. Be sure to check your Elasticsearch logs for any sign of memory pressure.
Possibility #2: You are running Elasticsearch 1.4 or higher
Elasticsearch 1.4 ships with a security setting that prevents Kibana from connecting. You will need to set http.cors.allow-origin in your elasticsearch.yml to the correct protocol, hostname, and port (if not 80) that your access Kibana from. Note that if you are running Kibana in a sub-url, you should exclude the sub-url path and only include the protocol, hostname and port. For example, http://mycompany.com:8080, not http://mycompany.com:8080/kibana.
Click back, or the home button, when you have resolved the connection issue
When I go to
http://XXX.XXX.XXX.XXX:9200/
I get:
{
"status" : 200,
"name" : "Benazir Kaur",
"cluster_name" : "elasticsearch",
"version" : {
"number" : "1.4.0",
"build_hash" : "bc94bd81298f81c656893ab1ddddd30a99356066",
"build_timestamp" : "2014-11-05T14:26:12Z",
"build_snapshot" : false,
"lucene_version" : "4.10.2"
},
"tagline" : "You Know, for Search"
}
So it seems that ElasticSearch is running, but for some reason Kibana cannot connect to it.
The ElasticSearch log contains an error:
[2014-11-08 13:02:41,474][INFO ][node ] [Virako] version[1.4.0], pid[5556], build[bc94bd8/2014-11-05T14:26:12Z]
[2014-11-08 13:02:41,490][INFO ][node ] [Virako] initializing ...
[2014-11-08 13:02:41,490][INFO ][plugins ] [Virako] loaded [], sites []
[2014-11-08 13:02:46,872][INFO ][node ] [Virako] initialized
[2014-11-08 13:02:46,872][INFO ][node ] [Virako] starting ...
[2014-11-08 13:02:47,402][INFO ][transport ] [Virako] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.0.14:9300]}
[2014-11-08 13:02:47,558][INFO ][discovery ] [Virako] elasticsearch/XyAjXnofTnG1CXgDoHrNsA
[2014-11-08 13:02:51,412][INFO ][cluster.service ] [Virako] new_master [Virako][XyAjXnofTnG1CXgDoHrNsA][test04][inet[/192.168.0.14:9300]], reason: zen-disco-join (elected_as_master)
[2014-11-08 13:02:51,521][INFO ][gateway ] [Virako] recovered [0] indices into cluster_state
[2014-11-08 13:02:51,552][INFO ][http ] [Virako] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.0.14:9200]}
[2014-11-08 13:02:51,552][INFO ][node ] [Virako] started
[2014-11-08 13:11:04,781][WARN ][transport.netty ] [Virako] exception caught on transport layer [[id: 0x3984a6b4, /192.168.0.14:58237 => /192.168.0.14:9300]], closing connection
java.io.StreamCorruptedException: invalid internal transport message format, got (47,45,54,20)
at org.elasticsearch.transport.netty.SizeHeaderFrameDecoder.decode(SizeHeaderFrameDecoder.java:47)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:425)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Any idea what am I doing wrong?
| I have faced a similar kind of issue.
If you are using elasticsearch-1.4 with Kibana-3, then add the following parameters to your elasticsearch.yml file:
http.cors.allow-origin: "/.*/"
http.cors.enabled: true
Reference,
https://gist.github.com/rmoff/379e6ce46eb128110f38
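If you want to confirm the CORS settings took effect after restarting Elasticsearch, one quick check (the Kibana hostname below is just a placeholder) is to send a request with an Origin header and look for the Access-Control-Allow-Origin header in the response:
curl -i -H "Origin: http://your-kibana-host" http://XXX.XXX.XXX.XXX:9200/
# the response headers should now include something like
# Access-Control-Allow-Origin: http://your-kibana-host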
| Logstash | 26,828,099 | 36 |
I have a remote machine that combines multiline events and sends them across the lumberjack protocol.
What comes in is something that looks like this:
{
"message" => "2014-10-20T20:52:56.133+0000 host 2014-10-20 15:52:56,036 [ERROR ][app.logic ] Failed to turn message into JSON\nTraceback (most recent call last):\n File \"somefile.py", line 249, in _get_values\n return r.json()\n File \"/path/to/env/lib/python3.4/site-packages/requests/models.py\", line 793, in json\n return json.loads(self.text, **kwargs)\n File \"/usr/local/lib/python3.4/json/__init__.py\", line 318, in loads\n return _default_decoder.decode(s)\n File \"/usr/local/lib/python3.4/json/decoder.py\", line 343, in decode\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n File \"/usr/local/lib/python3.4/json/decoder.py\", line 361, in raw_decode\n raise ValueError(errmsg(\"Expecting value\", s, err.value)) from None\nValueError: Expecting value: line 1 column 1 (char 0), Failed to turn message into JSON"
}
When I try to match the message with
grok {
match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} \[%LOGLEVEL:loglevel}%{ SPACE}\]\[%{NOTSPACE:module}%{SPACE}\]%{GREEDYDATA:message}" ]
}
the GREEDYDATA is not nearly as greedy as I would like.
So then I tried to use gsub:
mutate {
gsub => ["message", "\n", "LINE_BREAK"]
}
# Grok goes here
mutate {
gsub => ["message", "LINE_BREAK", "\n"]
}
but that one didn't work: rather than
The Quick brown fox
jumps over the lazy
groks
I got
The Quick brown fox\njumps over the lazy\ngroks
So...
How do I either add the newline back to my data, make the GREEDYDATA match my newlines, or in some other way grab the relevant portion of my message?
| GREEDYDATA is just .*, but . doesn't match newline, so you can replace %{GREEDYDATA:message} with (?<message>(.|\r|\n)*) and get it to be truly greedy.
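Applied to the grok from the question, a sketch could look like the following (it also fixes the stray brace and space in the original %{LOGLEVEL} and %{SPACE} patterns, and uses grok's overwrite option so the captured text replaces the original message field instead of being appended to it):
grok {
  match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} \[%{LOGLEVEL:loglevel}%{SPACE}\]\[%{NOTSPACE:module}%{SPACE}\](?<message>(.|\r|\n)*)" ]
  overwrite => [ "message" ]
}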
| Logstash | 26,474,873 | 34 |
I'm trying to parse a logfile using grok
Each line of the logfile has fields separated by commas:
13,home,ABC,Get,,Private, Public,1.2.3 ecc...
I'm using match like this:
match => [ "message", "%{NUMBER:requestId},%{WORD:ServerHost},%{WORD:Service},...
My question is: can I allow optional fields?
At times some of the fields might be empty: ,,
Is there a pattern that matches a string like this 2.3.5 ?
( a kind of version number )
| At its base, grok is based on regular expressions, so you can surround a pattern with ()? to make it optional -- for example (%{NUMBER:requestId})?,
If there isn't a grok pattern that suits your needs, you can always create a named extraction like this: (?<version>[\d\.]+) which would extract into version, a string that has any number of digits and dots in it.
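Combining the two for the start of the line in the question, a sketch might be (the method field name is a hypothetical placeholder for the fourth column, and the tail of the line is left to GREEDYDATA):
grok {
  match => [ "message", "%{NUMBER:requestId},(%{WORD:ServerHost})?,(%{WORD:Service})?,(%{WORD:method})?,%{GREEDYDATA:rest}" ]
}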
| Logstash | 30,083,719 | 32 |
I am exploring ELK stack and coming across an issue.
I have generated logs, forwarded the logs to logstash, logs are in JSON format so they are pushed directly into ES with only JSON filter in Logstash config, connected and started Kibana pointing to the ES.
Logstash Config:
filter {
  json {
    source => "message"
  }
}
Now I have indexes created for each day's log and Kibana happily shows all of the logs from all indexes.
My issue is: there are many fields in the logs which are not enabled/indexed for filtering in Kibana. When I try to add them to the filter in Kibana, it says "unindexed fields cannot be searched".
Note: these are not sys/apache log. There are custom logs in JSON format.
Log format:
{"message":"ResponseDetails","@version":"1","@timestamp":"2015-05-23T03:18:51.782Z","type":"myGateway","file":"/tmp/myGatewayy.logstash","host":"localhost","offset":"1072","data":"text/javascript","statusCode":200,"correlationId":"a017db4ebf411edd3a79c6f86a3c0c2f","docType":"myGateway","level":"info","timestamp":"2015-05-23T03:15:58.796Z"}
fields like 'statusCode', 'correlationId' are not getting indexed. Any reason why?
Do I need to give a Mapping file to ES to ask it to index either all or given fields?
| You've updated the Kibana field list?
Kibana.
Settings.
Reload field list.
Newer version:
Kibana.
Management.
Refresh icon on the top right.
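If refreshing the field list doesn't help, it is worth checking what Elasticsearch actually has in its mapping for those fields; a quick sketch (the index name is a placeholder for one of your daily indexes):
curl -XGET 'http://localhost:9200/your-index-name/_mapping?pretty'
If statusCode and correlationId appear there, Kibana only needed the field list refresh; if they don't, the documents are probably not being parsed as JSON before they are indexed.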
| Logstash | 30,471,859 | 31 |
Is there any way in logstash to use a conditional to check if a specific tag exists?
For example,
grok {
match => [
"message", "Some expression to match|%{GREEDYDATA:NOMATCHES}"
]
if NOMATCHES exists, do something.
How do I verify if NOMATCHES tag exists or not?
Thanks.
| Just so we're clear: the config snippet you provided is setting a field, not a tag.
Logstash events can be thought of as a dictionary of fields. A field named tags is referenced by many plugins via add_tag and remove_tag operations.
You can check if a tag is set:
if "foo" in [tags] {
...
}
But you seem to want to check if a field contains anything:
if [NOMATCHES] =~ /.+/ {
...
}
The above will check that NOMATCHES exists and isn't empty.
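Putting that together with the grok from your question, a minimal sketch would be:
filter {
  grok {
    match => [ "message", "Some expression to match|%{GREEDYDATA:NOMATCHES}" ]
  }
  if [NOMATCHES] =~ /.+/ {
    # the line fell through to the catch-all capture; do something, e.g. tag it
    mutate { add_tag => [ "nomatch" ] }
  }
}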
Reference: configuration file overview.
| Logstash | 21,438,697 | 29 |
Background:
I have a custom generated log file that has the following pattern :
[2014-03-02 17:34:20] - 127.0.0.1|ERROR| E:\xampp\htdocs\test.php|123|subject|The error message goes here ; array (
'create' =>
array (
'key1' => 'value1',
'key2' => 'value2',
'key3' => 'value3'
),
)
[2014-03-02 17:34:20] - 127.0.0.1|DEBUG| flush_multi_line
The second entry, [2014-03-02 17:34:20] - 127.0.0.1|DEBUG| flush_multi_line, is a dummy line, just to let logstash know that the multi-line event is over; this line is dropped later on.
My config file is the following :
input {
stdin{}
}
filter{
multiline{
pattern => "^\["
what => "previous"
negate=> true
}
grok{
match => ['message',"\[.+\] - %{IP:ip}\|%{LOGLEVEL:loglevel}"]
}
if [loglevel] == "DEBUG"{ # the event flush line
drop{}
}else if [loglevel] == "ERROR" { # the first line of multievent
grok{
match => ['message',".+\|.+\| %{PATH:file}\|%{NUMBER:line}\|%{WORD:tag}\|%{GREEDYDATA:content}"]
}
}else{ # its a new line (from the multi line event)
mutate{
replace => ["content", "%{content} %{message}"] # Supposing each new line will override the message field
}
}
}
output {
stdout{ debug=>true }
}
The output for content field is : The error message goes here ; array (
Problem:
My problem is that I want to store the rest of the multiline to content field :
The error message goes here ; array (
'create' =>
array (
'key1' => 'value1',
'key2' => 'value2',
'key3' => 'value3'
),
)
So i can remove the message field later.
The @message field contains the whole multiline event so I tried the mutate filter, with the replace function on that, but I'm just unable to get it working :( .
I don't understand the Multiline filter's way of working, if someone could shed some light on this, it would be really appreciated.
Thanks,
Abdou.
| I went through the source code and found out that :
The multiline filter will cancel all the events that are considered to be a follow up of a pending event, then append that line to the original message field, meaning any filters that are after the multiline filter won't apply in this case
The only event that will ever pass the filter, is one that is considered to be a new one ( something that start with [ in my case )
Here is the working code :
input {
stdin{}
}
filter{
if "|ERROR|" in [message]{ #if this is the 1st message in many lines message
grok{
match => ['message',"\[.+\] - %{IP:ip}\|%{LOGLEVEL:loglevel}\| %{PATH:file}\|%{NUMBER:line}\|%{WORD:tag}\|%{GREEDYDATA:content}"]
}
mutate {
replace => [ "message", "%{content}" ] #replace the message field with the content field ( so it auto append later in it )
remove_field => ["content"] # we no longer need this field
}
}
multiline{ #Nothing will pass this filter unless it is a new event ( new [2014-03-02 1.... )
pattern => "^\["
what => "previous"
negate=> true
}
if "|DEBUG| flush_multi_line" in [message]{
drop{} # We don't need the dummy line so drop it
}
}
output {
stdout{ debug=>true }
}
Cheers,
Abdou
| Logstash | 22,172,879 | 28 |
I'm using Logstash + Elasticsearch + Kibana to have an overview of my Tomcat log files.
For each log entry I need to know the name of the file from which it came. I'd like to add it as a field. Is there a way to do it?
I've googled a little and I've only found this SO question, but the answer is no longer up-to-date.
So far the only solution I see is to specify separate configuration for each possible file name with different "add_field" like so:
input {
file {
type => "catalinalog"
path => [ "/path/to/my/files/catalina**" ]
add_field => { "server" => "prod1" }
}
}
But then I need to reconfigure logstash each time there is a new possible file name.
Any better ideas?
| Hi, I added a grok filter to do just this. I only wanted the filename, not the full path, but you can adapt it to your needs.
filter {
grok {
match => ["path","%{GREEDYDATA}/%{GREEDYDATA:filename}\.log"]
}
}
| Logstash | 22,916,200 | 28 |
Well, after looking around quite a lot, I could not find a solution to my problem, as it "should" work, but obviously doesn't.
I'm using Logstash 1.4.2-1-2-2c0f5a1 on an Ubuntu 14.04 LTS machine, and I am receiving messages such as the following one:
2014-08-05 10:21:13,618 [17] INFO Class.Type - This is a log message from the class:
BTW, I am also multiline
In the input configuration, I do have a multiline codec and the event is parsed correctly. I also separate the event text into several parts so that it is easier to read.
In the end, I obtain, as seen in Kibana, something like the following (JSON view):
{
"_index": "logstash-2014.08.06",
"_type": "customType",
"_id": "PRtj-EiUTZK3HWAm5RiMwA",
"_score": null,
"_source": {
"@timestamp": "2014-08-06T08:51:21.160Z",
"@version": "1",
"tags": [
"multiline"
],
"type": "utg-su",
"host": "ubuntu-14",
"path": "/mnt/folder/thisIsTheLogFile.log",
"logTimestamp": "2014-08-05;10:21:13.618",
"logThreadId": "17",
"logLevel": "INFO",
"logMessage": "Class.Type - This is a log message from the class:\r\n BTW, I am also multiline\r"
},
"sort": [
"21",
1407315081160
]
}
You may have noticed that I put a ";" in the timestamp. The reason is that I want to be able to sort the logs using the timestamp string, and apparently logstash is not that good at that (e.g.: http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/multi-fields.html).
I have unsuccessfully tried to use the date filter in multiple ways, and it apparently did not work.
date {
locale => "en"
match => ["logTimestamp", "YYYY-MM-dd;HH:mm:ss.SSS", "ISO8601"]
timezone => "Europe/Vienna"
target => "@timestamp"
add_field => { "debug" => "timestampMatched"}
}
Since I read that the Joda library may have problems if the string is not strictly ISO 8601-compliant (very picky and expects a T, see https://logstash.jira.com/browse/LOGSTASH-180), I also tried to use mutate to convert the string to something like 2014-08-05T10:21:13.618 and then use "YYYY-MM-dd'T'HH:mm:ss.SSS". That also did not work.
I do not want to have to manually put a +02:00 on the time because that would give problems with daylight saving.
In any of these cases, the event goes to elasticsearch, but date does apparently nothing, as @timestamp and logTimestamp are different and no debug field is added.
Any idea how I could make the logTime strings properly sortable? I focused on converting them to a proper timestamp, but any other solution would also be welcome.
As you can see below:
When sorting over @timestamp, elasticsearch can do it properly, but since this is not the "real" log timestamp, but rather when the logstash event was read, I need (obviously) to be able to sort over logTimestamp as well. This is what is then output, which is obviously not that useful:
Any help is welcome! Just let me know if I forgot some information that may be useful.
Update:
Here is the filter config file that finally worked:
# Filters messages like this:
# 2014-08-05 10:21:13,618 [17] INFO Class.Type - This is a log message from the class:
# BTW, I am also multiline
# Take only type- events (type-componentA, type-componentB, etc)
filter {
# You cannot write an "if" outside of the filter!
if "type-" in [type] {
grok {
# Parse timestamp data. We need the "(?m)" so that grok (Oniguruma internally) correctly parses multi-line events
patterns_dir => "./patterns"
match => [ "message", "(?m)%{TIMESTAMP_ISO8601:logTimestampString}[ ;]\[%{DATA:logThreadId}\][ ;]%{LOGLEVEL:logLevel}[ ;]*%{GREEDYDATA:logMessage}" ]
}
# The timestamp may have commas instead of dots. Convert so as to store everything in the same way
mutate {
gsub => [
# replace all commas with dots
"logTimestampString", ",", "."
]
}
mutate {
gsub => [
# make the logTimestamp sortable. With a space, it is not! This does not work that well, in the end
# but somehow apparently makes things easier for the date filter
"logTimestampString", " ", ";"
]
}
date {
locale => "en"
match => ["logTimestampString", "YYYY-MM-dd;HH:mm:ss.SSS"]
timezone => "Europe/Vienna"
target => "logTimestamp"
}
}
}
filter {
if "type-" in [type] {
# Remove already-parsed data
mutate {
remove_field => [ "message" ]
}
}
}
| I have tested your date filter. It works for me!
Here is my configuration
input {
stdin{}
}
filter {
date {
locale => "en"
match => ["message", "YYYY-MM-dd;HH:mm:ss.SSS"]
timezone => "Europe/Vienna"
target => "@timestamp"
add_field => { "debug" => "timestampMatched"}
}
}
output {
stdout {
codec => "rubydebug"
}
}
And I use this input:
2014-08-01;11:00:22.123
The output is:
{
"message" => "2014-08-01;11:00:22.123",
"@version" => "1",
"@timestamp" => "2014-08-01T09:00:22.123Z",
"host" => "ABCDE",
"debug" => "timestampMatched"
}
So, please make sure that your logTimestamp has the correct value.
It is probably some other problem. If not, could you provide your log event and Logstash configuration for more discussion? Thank you.
| Logstash | 25,156,517 | 28 |
Recently our server was rebooted without correctly shutting down Elasticsearch / Kibana. After that reboot, both applications were running but no indices were getting created anymore. I checked the Logstash setup in debug mode and it is sending data to Elasticsearch.
now all my created windows report this error:
Oops! SearchPhaseExecutionException[Failed to execute phase [query], all shards failed]
I tried restarting Elasticsearch / Kibana and cleared some indices. I searched a lot but wasn't able to troubleshoot this correctly.
The current cluster health status is RED, as shown in the picture.
Any help on how to troubleshoot this is appreciated. Thank you
EDIT:
[2015-05-06 00:00:01,561][WARN ][cluster.action.shard ] [Indech] [logstash-2015.03.16][1] sending failed shard for [logstash-2015.03.16][1], node[fdSgUPDbQB2B3NQqX7MdMQ], [P], s[INITIALIZING], indexUUID [aBcfbqnNR4-AGEdIR8dVdg], reason [Failed to start shard, message [IndexShardGatewayRecoveryException[[logstash-2015.03.16][1] failed to recover shard]; nested: ElasticsearchIllegalArgumentException[No version type match [101]]; ]]
[2015-05-06 00:00:01,561][WARN ][cluster.action.shard ] [Indech] [logstash-2015.03.16][1] received shard failed for [logstash-2015.03.16][1], node[fdSgUPDbQB2B3NQqX7MdMQ], [P], s[INITIALIZING], indexUUID [aBcfbqnNR4-AGEdIR8dVdg], reason [Failed to start shard, message [IndexShardGatewayRecoveryException[[logstash-2015.03.16][1] failed to recover shard]; nested: ElasticsearchIllegalArgumentException[No version type match [101]]; ]]
[2015-05-06 00:00:02,591][WARN ][indices.cluster ] [Indech] [logstash-2015.04.21][4] failed to start shard
org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [logstash-2015.04.21][4] failed to recover shard
at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:269)
at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:132)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: org.elasticsearch.ElasticsearchIllegalArgumentException: No version type match [52]
at org.elasticsearch.index.VersionType.fromValue(VersionType.java:307)
at org.elasticsearch.index.translog.Translog$Create.readFrom(Translog.java:364)
at org.elasticsearch.index.translog.TranslogStreams.readTranslogOperation(TranslogStreams.java:52)
at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:241)
What concerns me in the logs is this:
[2015-05-06 15:13:48,059][DEBUG][action.search.type ] All shards failed for phase: [query]
{
"cluster_name" : "elasticsearch",
"status" : "red",
"timed_out" : false,
"number_of_nodes" : 8,
"number_of_data_nodes" : 1,
"active_primary_shards" : 120,
"active_shards" : 120,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 310
}
| You have many corrupt translog files, which you need to delete. You can find one in data/{clustername}/nodes/0/indices/logstash-2015.04.21/4/translog and another in data/{clustername}/nodes/0/indices/logstash-2015.03.16/1/translog. And maybe there are others, but this is what I can tell from the snippet you provided. Of course, you will lose what is in the translog files.
If the indices don't have the index files anymore (only _state folder exists under data/{clustername}/nodes/0/indices/[index_name]) this means there is no data in that index anymore and at this point you can delete the index. You need to reindex that data, if you still need it. If you decide to delete the indices, you need to shutdown the node and delete the index folders under data/{clustername}/nodes/0/indices that are like the one you mentioned (empty, containing just the _state folder).
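For the two shards in your log excerpt, that boils down to something like the following sketch (stop the node first; the paths follow the ones from your error messages, with {clustername} as the placeholder used above):
# with the Elasticsearch node stopped
rm -rf data/{clustername}/nodes/0/indices/logstash-2015.03.16/1/translog
rm -rf data/{clustername}/nodes/0/indices/logstash-2015.04.21/4/translog
# then start the node again and watch the recovery in the logs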
| Logstash | 30,073,759 | 27 |
Is it possible to log actions of the logstash file plugin? (i.e. what files it tries to send, what errors happen, etc)
| In newer versions the stdout output format changed; use:
stdout { codec => rubydebug }
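In a config file that line sits inside an output block, e.g.:
output {
  stdout { codec => rubydebug }
}
That prints every event Logstash emits; for the file input's own activity (which files it opens, errors, and so on) you would additionally run Logstash with verbose or debug logging enabled, which is version dependent.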
| Logstash | 19,086,404 | 25 |
I'm trying to backfill some past Apache access log data with logstash, therefore I need the event @timestamp to be set to the date appearing in the log message. This is my current logstash configuration:
input {
tcp {
type => "access_log"
port => 9293
}
}
filter {
grok {
match => { "message" => "%{COMBINEDAPACHELOG}" }
}
date {
# Try to pull the timestamp from the 'timestamp' field (parsed above with
# grok). The apache time format looks like: "18/Aug/2011:05:44:34 -0700"
locale => "en"
timezone => "America/New_York"
match => { "timestamp" => "dd/MMM/yyyy:HH:mm:ss Z" }
add_tag => [ "tsmatch" ]
}
}
output {
stdout { codec => rubydebug }
}
However, the date filter doesn't seem to update the event @timestamp, even though the Apache timestamp is captured correctly and the regular expression should match it. The output data looks like this:
{
"message" => "56.116.21.231 - - [20/Nov/2013:22:47:08 -0500] \"GET /xxxx/1.305/xxxx/xxxx.zip HTTP/1.1\" 200 33002333 \"-\" \"xxxxx/3.0.3 CFNetwork/609.1.4 Darwin/13.0.0\"",
"@timestamp" => "2013-12-01T12:54:27.920Z",
"@version" => "1",
"type" => "access_log",
"host" => "0:0:0:0:0:0:0:1%0:51045",
"clientip" => "56.116.21.231",
"ident" => "-",
"auth" => "-",
"timestamp" => "20/Nov/2013:22:47:08 -0500",
"verb" => "GET",
"request" => "/xxxx/1.305/xxxx/xxxx.zip",
"httpversion" => "1.1",
"response" => "200",
"bytes" => "33002333",
"referrer" => "\"-\"",
"agent" => "\"xxxxx/3.0.3 CFNetwork/609.1.4 Darwin/13.0.0\"",
"tags" => [
[0] "tsmatch"
]
}
Any ideas on what could be wrong?
I'm using the logstash-1.2.2 flatjar.
| Ok, I found the problem, I was using the wrong syntax on the match operation:
match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
NOT
match => { "timestamp" => "dd/MMM/yyyy:HH:mm:ss Z" }
| Logstash | 20,312,416 | 25 |
The syntax for a grok pattern is %{SYNTAX:SEMANTIC}. How do I generate a list of all available SYNTAX keywords? I know that I can use the grok debugger to discover patterns from text, but is there a list which I can scan through?
| They are in Git and shipped with the distribution (under the patterns directory), but it's probably easiest to view them online:
https://github.com/elasticsearch/logstash/blob/v1.4.0/patterns/grok-patterns
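If you have a Logstash checkout or distribution handy, you can also dump just the SYNTAX names from that file (assuming it lives at patterns/grok-patterns as in the link above), for example:
grep -v '^#' patterns/grok-patterns | awk 'NF {print $1}' | sort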
| Logstash | 23,523,790 | 24 |
I have a basic Logstash -> Elasticsearch setup, and it turns out the 'message' field is not required after the Logstash filter has done its job - storing this raw message field to Elasticsearch only adds unnecessary data to storage, imo.
Can I safely delete this field, and would it cause any trouble to ES? Advice or reading suggestions are welcome, thanks all.
| No, it will not cause any trouble to ES. You can delete the message field if it is redundant or unused.
You can add this filter at the end of your filter section:
mutate {
  remove_field => [ "message" ]
}
| Logstash | 26,006,826 | 24 |
I'm doing the "Elasticsearch getting started" tutorial. Unfortunately, the tutorial doesn't cover the first step, which is importing a CSV database into Elasticsearch.
I googled for a solution but, unfortunately, what I found doesn't work. Here is what I want to achieve and what I have:
I have a file with data which I want to import (simplified)
id,title
10,Homer's Night Out
12,Krusty Gets Busted
I would like to import it using logstash. After research over the internet I end up with following config:
input {
file {
path => ["simpsons_episodes.csv"]
start_position => "beginning"
}
}
filter {
csv {
columns => [
"id",
"title"
]
}
}
output {
stdout { codec => rubydebug }
elasticsearch {
action => "index"
hosts => ["127.0.0.1:9200"]
index => "simpsons"
document_type => "episode"
workers => 1
}
}
I am having trouble specifying the document type: once the data is imported, I want to navigate to http://localhost:9200/simpsons/episode/10 and see the result for episode 10.
| Good job, you're almost there, you're only missing the document ID. You need to modify your elasticsearch output like this:
elasticsearch {
action => "index"
hosts => ["127.0.0.1:9200"]
index => "simpsons"
document_type => "episode"
document_id => "%{id}" <---- add this line
workers => 1
}
After this you'll be able to query episode with id 10
GET http://localhost:9200/simpsons/episode/10
| Logstash | 43,701,016 | 24 |
I have a set of dockerized applications scattered across multiple servers and trying to setup production-level centralized logging with ELK. I'm ok with the ELK part itself, but I'm a little confused about how to forward the logs to my logstashes.
I'm trying to use Filebeat, because of its loadbalance feature.
I'd also like to avoid packing Filebeat (or anything else) into all my dockers, and keep it separated, dockerized or not.
How can I proceed?
I've been trying the following. My Dockers log on stdout so with a non-dockerized Filebeat configured to read from stdin I do:
docker logs -f mycontainer | ./filebeat -e -c filebeat.yml
That appears to work at the beginning: the first logs are forwarded to my logstash (the cached ones, I guess), but at some point it gets stuck and keeps sending the same event.
Is that just a bug or am I headed in the wrong direction? What solution have you set up?
| Here's one way to forward docker logs to the ELK stack (requires docker >= 1.8 for the gelf log driver):
Start a Logstash container with the gelf input plugin to reads from gelf and outputs to an Elasticsearch host (ES_HOST:port):
docker run --rm -p 12201:12201/udp logstash \
logstash -e 'input { gelf { } } output { elasticsearch { hosts => ["ES_HOST:PORT"] } }'
Now start a Docker container and use the gelf Docker logging driver. Here's a dumb example:
docker run --log-driver=gelf --log-opt gelf-address=udp://localhost:12201 busybox \
/bin/sh -c 'while true; do echo "Hello $(date)"; sleep 1; done'
Load up Kibana and things that would've landed in docker logs are now visible. The gelf source code shows that some handy fields are generated for you (hat-tip: Christophe Labouisse): _container_id, _container_name, _image_id, _image_name, _command, _tag, _created.
If you use docker-compose (make sure to use docker-compose >= 1.5) and add the appropriate settings in docker-compose.yml after starting the logstash container:
log_driver: "gelf"
log_opt:
gelf-address: "udp://localhost:12201"
| Logstash | 33,432,983 | 22 |
I used the following piece of code to create an index in logstash.conf
output {
stdout {codec => rubydebug}
elasticsearch {
host => "localhost"
protocol => "http"
index => "trial_indexer"
}
}
To create another index I generally replace the index name with another in the above code. Is there any way of creating many indexes in the same file? I'm new to ELK.
| You can use a pattern in your index name based on the value of one of your fields. Here we use the value of the type field in order to name the index:
output {
stdout {codec => rubydebug}
elasticsearch {
host => "localhost"
protocol => "http"
index => "%{type}_indexer"
}
}
You can also use several elasticsearch outputs either to the same ES host or to different ES hosts:
output {
stdout {codec => rubydebug}
elasticsearch {
host => "localhost"
protocol => "http"
index => "trial_indexer"
}
elasticsearch {
host => "localhost"
protocol => "http"
index => "movie_indexer"
}
}
Or maybe you want to route your documents to different indices based on some variable:
output {
stdout {codec => rubydebug}
if [type] == "trial" {
elasticsearch {
host => "localhost"
protocol => "http"
index => "trial_indexer"
}
} else {
elasticsearch {
host => "localhost"
protocol => "http"
index => "movie_indexer"
}
}
}
UPDATE
The syntax has changed a little bit in Logstash 2 and 5:
output {
stdout {codec => rubydebug}
if [type] == "trial" {
elasticsearch {
hosts => "localhost:9200"
index => "trial_indexer"
}
} else {
elasticsearch {
hosts => "localhost:9200"
index => "movie_indexer"
}
}
}
| Logstash | 33,820,478 | 22 |
I am trying to find the different kinds of syntax I can use in a regex-type query through Kibana, but I was not able to find any information on this.
I am running logstash and elasticsearch in the backend.
Any answer or example will be helpful.
| so any regular expressions are valid in grok as well. The regular expression library is Oniguruma.
I took this from the logstash docs online.
Also from [a Google Groups post]:
Kibana is a web interface which stay in front of ElasticSearch: to understand
the query syntax you have to know something more about Apache Lucene,
which is the text search engine used by ElasticSearch.
Here's a small tutorial about the query styles you can use with Lucene and
by inheritance with your Kibana web interface:
http://www.lucenetutorial.com/lucene-query-syntax.html This link is dead, I'm not sure but this might be an adequate replacement. (I've saved it into wayback machine as it keeps dying...)
See also the official Lucene Query Syntax documentation.
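A few representative Lucene-style queries you can type into the Kibana search bar (field names are placeholders, exact behaviour depends on your Kibana/Elasticsearch versions, and the trailing comments are annotations, not part of the query):
response:404                    # term query on a field
response:[500 TO 599]           # range query
message:"connection refused"    # phrase query
message:/fail(ed|ure)/          # regular expression query, delimited by forward slashes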
| Logstash | 21,955,183 | 21 |
So, I'm building a full cloud solution using kubernetes and spring boot.
My spring boot application is deployed to a container and logs directly on the console.
As containers are ephemeral, I'd like to also send logs to a remote logstash server, so that they can be processed and sent to elastic.
Normally I would install a filebeat on the server hosting my application, and I could, but isn't there any builtin method allowing me to avoid writing my log on a file before sending it?
Currently I'm using log4j, but I see no problem in switching to another logger as long as it has a "logbackappender".
| You can try adding a logback.xml in the resources folder:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE configuration>
<configuration scan="true">
<include resource="org/springframework/boot/logging/logback/base.xml"/>
<appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
<param name="Encoding" value="UTF-8"/>
<remoteHost>localhost</remoteHost>
<port>5000</port>
<encoder class="net.logstash.logback.encoder.LogstashEncoder">
<customFields>{"app_name":"YourApp", "app_port": "YourPort"}</customFields>
</encoder>
</appender>
<root level="INFO">
<appender-ref ref="logstash"/>
</root>
</configuration>
Then add logstash encoder dependency :
pom.xml
<dependency>
<groupId>net.logstash.logback</groupId>
<artifactId>logstash-logback-encoder</artifactId>
<version>4.11</version>
</dependency>
logstash.conf
input {
udp {
port => "5000"
type => syslog
codec => json
}
tcp {
port => "5000"
type => syslog
codec => json_lines
}
http {
port => "5001"
codec => "json"
}
}
filter {
if [type] == "syslog" {
mutate {
add_field => { "instance_name" => "%{app_name}-%{host}:%{app_port}" }
}
}
}
output {
elasticsearch {
hosts => ["${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}"]
index => "logs-%{+YYYY.MM.dd}"
}
}
| Logstash | 57,399,354 | 21 |
I am writing a Filebeat configuration where I am matching whether a line starts with a number like 03:32:33 (a timestamp). I am currently doing it with:
\d
But it's not getting recognised; is there anything else I should do? I am not particularly good with / experienced in regex. Help will be appreciated.
| The real problem is that filebeat does not support \d.
Replace \d by [0-9] and your regular expression will work.
I suggest you take a look at Filebeat's Supported Patterns.
Also, be sure you've used ^, it stands for the start of the string.
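If the point of matching the timestamp is to treat lines that do not start with one as continuations of the previous line (a common use case, but an assumption about your setup), a multiline sketch in the Filebeat 1.x YAML layout could look like this, with the path being a placeholder:
filebeat:
  prospectors:
    -
      paths:
        - /var/log/myapp/*.log
      multiline:
        pattern: '^[0-9]{2}:[0-9]{2}:[0-9]{2}'
        negate: true
        match: after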
| Logstash | 37,531,205 | 20 |
For optimization purposes, I am trying to cut down my total field count. However before I am going to do that I want to get an idea of how many fields I actually have. There doesn't seem to be any Information in the _stats endpoint and I can't quite figure out how the migration tool does its field count calculation.
Is there some way, either with an endpoint or by other means, to get the total field count of a specified index?
| To build a bit further upon what the other answer provided, you can get the mapping and then simply count the number of times the keyword type appears in the output, which gives the number of fields since each field needs a type:
curl -s -XGET localhost:9200/index/_mapping?pretty | grep type | wc -l
| Logstash | 40,586,020 | 20 |
We have an existing search function that involves data across multiple tables in SQL Server. This causes a heavy load on our DB, so I'm trying to find a better way to search through this data (it doesn't change very often). I have been working with Logstash and Elasticsearch for about a week using an import containing 1.2 million records. My question is essentially, "how do I update existing documents using my 'primary key'"?
CSV data file (pipe delimited) looks like this:
369|90045|123 ABC ST|LOS ANGELES|CA
368|90045|PVKA0010|LA|CA
367|90012|20000 Venice Boulvd|Los Angeles|CA
365|90045|ABC ST 123|LOS ANGELES|CA
363|90045|ADHOCTESTPROPERTY|DALES|CA
My logstash config looks like this:
input {
stdin {
type => "stdin-type"
}
file {
path => ["C:/Data/sample/*"]
start_position => "beginning"
}
}
filter {
csv {
columns => ["property_id","postal_code","address_1","city","state_code"]
separator => "|"
}
}
output {
elasticsearch {
embedded => true
index => "samples4"
index_type => "sample"
}
}
A document in elasticsearch, then looks like this:
{
"_index": "samples4",
"_type": "sample",
"_id": "64Dc0_1eQ3uSln_k-4X26A",
"_score": 1.4054651,
"_source": {
"message": [
"369|90045|123 ABC ST|LOS ANGELES|CA\r"
],
"@version": "1",
"@timestamp": "2014-02-11T22:58:38.365Z",
"host": "[host]",
"path": "C:/Data/sample/sample.csv",
"property_id": "369",
"postal_code": "90045",
"address_1": "123 ABC ST",
"city": "LOS ANGELES",
"state_code": "CA"
}
I think would like the unique ID in the _id field, to be replaced with the value of property_id. The idea is that subsequent data files would contain updates. I don't need to keep previous versions and there wouldn't be a case where we added or removed keys from a document.
The document_id setting for elasticsearch output doesn't put that field's value into _id (it just put in "property_id" and only stored/updated one document). I know I'm missing something here. Am I just taking the wrong approach?
EDIT: WORKING!
Using @rutter's suggestion, I've updated the output config to this:
output {
elasticsearch {
embedded => true
index => "samples6"
index_type => "sample"
document_id => "%{property_id}"
}
}
Now documents are updating by dropping new files into the data folder as expected. _id and property_id are the same value.
{
"_index": "samples6",
"_type": "sample",
"_id": "351",
"_score": 1,
"_source": {
"message": [
"351|90045|Easy as 123 ST|LOS ANGELES|CA\r"
],
"@version": "1",
"@timestamp": "2014-02-12T16:12:52.102Z",
"host": "TXDFWL3474",
"path": "C:/Data/sample/sample_update_3.csv",
"property_id": "351",
"postal_code": "90045",
"address_1": "Easy as 123 ST",
"city": "LOS ANGELES",
"state_code": "CA"
}
| Converting from comment:
You can overwrite a document by sending another document with the same ID... but that might be tricky with your previous data, since you'll get randomized IDs by default.
You can set an ID using the output plugin's document_id field, but it takes a literal string, not a field name. To use a field's contents, you could use an sprintf format string, such as %{property_id}.
Something like this, for example:
output {
elasticsearch {
... other settings...
document_id => "%{property_id}"
}
}
| Logstash | 21,716,002 | 19 |
I am using logstash to feed logs into ElasticSearch.
I am configuring logstash output as:
input {
file {
path => "/tmp/foo.log"
codec =>
plain {
format => "%{message}"
}
}
}
output {
elasticsearch {
#host => localhost
codec => json {}
manage_template => false
index => "4glogs"
}
}
I notice that as soon as I start logstash it creates a mapping ( logs ) in ES as below.
{
"4glogs": {
"mappings": {
"logs": {
"properties": {
"@timestamp": {
"type": "date",
"format": "dateOptionalTime"
},
"@version": {
"type": "string"
},
"message": {
"type": "string"
}
}
}
}
}
}
How can I prevent logstash from creating this mapping ?
UPDATE:
I have now resolved this error too. "object mapping for [logs] tried to parse as object, but got EOF, has a concrete value been provided to it?"
As John Petrone has stated below, once you define a mapping, you have to ensure that your documents conform to the mapping. In my case, I had defined a mapping of "type: nested" but the output from logstash was a string.
So I removed all codecs ( whether json or plain ) from my logstash config and that allowed the json document to pass through without changes.
Here is my new logstash config ( with some additional filters for multiline logs ).
input {
kafka {
zk_connect => "localhost:2181"
group_id => "logstash_group"
topic_id => "platform-logger"
reset_beginning => false
consumer_threads => 1
queue_size => 2000
consumer_id => "logstash-1"
fetch_message_max_bytes => 1048576
}
file {
path => "/tmp/foo.log"
}
}
filter {
multiline {
pattern => "^\s"
what => "previous"
}
multiline {
pattern => "[0-9]+$"
what => "previous"
}
multiline {
pattern => "^$"
what => "previous"
}
mutate{
remove_field => ["kafka"]
remove_field => ["@version"]
remove_field => ["@timestamp"]
remove_tag => ["multiline"]
}
}
output {
elasticsearch {
manage_template => false
index => "4glogs"
}
}
| You will need a mapping to store data in Elasticsearch and to search on it - that's how ES knows how to index and search those content types. You can either let logstash create it dynamically or you can prevent it from doing so and instead create it manually.
Keep in mind you cannot change existing mappings (although you can add to them). So first off you will need to delete the existing index. You would then modify your settings to prevent dynamic mapping creation. At the same time you will want to create your own mapping.
For example, this will create the mappings for the logstash data but also restrict any dynamic mapping creation via "strict":
$ curl -XPUT 'http://localhost:9200/4glogs/logs/_mapping' -d '
{
"logs" : {
"dynamic": "strict",
"properties" : {
"@timestamp": {
"type": "date",
"format": "dateOptionalTime"
},
"@version": {
"type": "string"
},
"message": {
"type": "string"
}
}
}
}
'
Keep in mind that the index name "4glogs" and the type "logs" need to match what is coming from logstash.
For my production systems I generally prefer to turn off dynamic mapping as it avoids accidental mapping creation.
The following links should be useful if you want to make adjustments to your dynamic mappings:
https://www.elastic.co/guide/en/elasticsearch/guide/current/dynamic-mapping.html
http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/custom-dynamic-mapping.html
http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/dynamic-mapping.html
| Logstash | 24,924,248 | 19 |
I have a JSON file that I'm sending to ES through Logstash. I would like to remove one field (it's a deep field) in the JSON - ONLY if the value is NULL.
Part of the JSON is:
"input": {
"startDate": "2015-05-27",
"numberOfGuests": 1,
"fileName": "null",
"existingSessionId": "XXXXXXXXXXXXX",
**"radius": "null",**
"nextItemReference": "51",
"longitude": -99.12,
"endDate": "2015-05-29",
"thumbnailHeight": 200,
"thumbnailWidth": 300,
"latitude": 19.42,
"numOfRooms": "1"
},
Part in the logstash.conf file is :
if [input.radius] == "null" {
mutate {
remove_field => [ "input.radius" ]
}
}
This is inside the filter of course.
How can I remove this field if the value is null?
| Nested fields aren't referred to as [name.subfield] but as [field][subfield]. This should work for you:
if [input][radius] == "null" {
mutate {
remove_field => [ "[input][radius]" ]
}
}
Note that if there is no "input" field, the [input][radius] reference will create an empty "input" dictionary. To avoid that you can do this:
if [input] and [input][radius] == "null" {
mutate {
remove_field => [ "[input][radius]" ]
}
}
See the Logstash documentation for details and more examples.
| Logstash | 30,369,148 | 19 |
Is the Logstash configuration reloaded every time the agent is restarted? It doesn't seem to pick up my changes immediately (e.g. changed type value)
I'm running it with an embedded elasticsearch v.0.90.7 on Windows 7 and Kibana 3.
Thank you very much!
Regards,
Paul
| Hot reload is not yet supported; there is an open issue for it.
The Logstash configuration is loaded at startup, so if you kill your process and restart it, there is no reason why it should not pick up your changes.
| Logstash | 20,695,665 | 18 |
I want to send logs from a Java app to ElasticSearch, and the conventional approach seems to be to set up Logstash on the server running the app, and have logstash parse the log files (with regex...!) and load them into ElasticSearch.
Is there a reason it's done this way, rather than just setting up log4J (or logback) to log things in the desired format directly into a log collector that can then be shipped to ElasticSearch asynchronously? It seems crazy to me to have to fiddle with grok filters to deal with multiline stack traces (and burn CPU cycles on log parsing) when the app itself could just log it the desired format in the first place?
On a tangentially related note, for apps running in a Docker container, is best practice to log directly to ElasticSearch, given the need to run only one process?
| If you really want to go down that path, the idea would be to use something like an Elasticsearch appender (or this one or this other one) which would ship your logs directly to your ES cluster.
However, I'd advise against it for the same reasons mentioned by @Vineeth Mohan. You'd also need to ask yourself a couple questions, but mainly what would happen if your ES cluster goes down for any reason (OOM, network down, ES upgrade, etc)?
There are many reasons why asynchronicity exists, one of which is robustness of your architecture and most of the time that's much more important than burning a few more CPU cycles on log parsing.
Also note that there is an ongoing discussion about this very subject going on in the official ES discussion forum.
| Logstash | 32,302,421 | 18 |
I am logging to Logstash in JSON format,
my logs have the following fields, each field is a string and the atts field is a stringified json (note: atts sub fields are different each time)
here is an example:
{"name":"bob","last":"builder", "atts":"{\"a\":111, \"b\":222}"}
I would like to parse it to something like this:
{
"name" => "bob",
"last" => "builder"
"atss" => {
"a" => 111,
"b" => 222}
}
here is my configuration:
input { stdin { } }
filter {
json {
source => "message"
target => "parsed"
}
}
output { stdout { codec => rubydebug }}
ok,
so now I get this:
{
"@timestamp" => 2017-04-05T12:19:04.090Z,
"parsed" => {
"atss" => "{\"a\":111, \"b\":222}",
"name" => "bob",
"last" => "the builder"
},
"@version" => "1",
"host" => "0.0.0.0"
}
how can I parse the atts field to json so I receive:
{
"@timestamp" => 2017-04-05T12:19:04.090Z,
"parsed" => {
"atss" =>
{"a" => 111,
"b" => 222},
"name" => "bob",
"last" => "the builder"
},
"@version" => "1",
"host" => "0.0.0.0"
}
| Thanks to @Alcanzar, here is what I did:
input {
stdin { }
}
filter {
json {
source => "message"
target => "message"
}
json {
source => "[message][atts]"
target => "[message][atts]"
}
}
output { stdout { codec => rubydebug }}
| Logstash | 43,232,683 | 18 |
I am using the Logstash jdbc input to keep things synced between MySQL and Elasticsearch. It's working fine for one table, but now I want to do it for multiple tables. Do I need to run, in separate terminals, multiple instances of
logstash agent -f /Users/logstash/logstash-jdbc.conf
each with its own select query, or is there a better way of doing it so we can have multiple tables being updated?
My config file:
input {
jdbc {
jdbc_driver_library => "/Users/logstash/mysql-connector-java-5.1.39-bin.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
jdbc_connection_string => "jdbc:mysql://localhost:3306/database_name"
jdbc_user => "root"
jdbc_password => "password"
schedule => "* * * * *"
statement => "select * from table1"
}
}
output {
elasticsearch {
index => "testdb"
document_type => "table1"
document_id => "%{table_id}"
hosts => "localhost:9200"
}
}
| You can definitely have a single config with multiple jdbc input and then parametrize the index and document_type in your elasticsearch output depending on which table the event is coming from.
input {
jdbc {
jdbc_driver_library => "/Users/logstash/mysql-connector-java-5.1.39-bin.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
jdbc_connection_string => "jdbc:mysql://localhost:3306/database_name"
jdbc_user => "root"
jdbc_password => "password"
schedule => "* * * * *"
statement => "select * from table1"
type => "table1"
}
jdbc {
jdbc_driver_library => "/Users/logstash/mysql-connector-java-5.1.39-bin.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
jdbc_connection_string => "jdbc:mysql://localhost:3306/database_name"
jdbc_user => "root"
jdbc_password => "password"
schedule => "* * * * *"
statement => "select * from table2"
type => "table2"
}
# add more jdbc inputs to suit your needs
}
output {
elasticsearch {
index => "testdb"
document_type => "%{type}" # <- use the type from each input
hosts => "localhost:9200"
}
}
| Logstash | 37,613,611 | 17 |
Can anyone show me what an if statement with a regex looks like in logstash?
My attempts:
if [fieldname] =~ /^[0-9]*$/
if [fieldname] =~ "^[0-9]*$"
Neither of which work.
What I intend to do is to check if the "fieldname" contains an integer
| To combine the other answers into a cohesive answer.
Your first format looks correct, but your regex is not doing what you want.
/^[0-9]*$/ matches:
^: the beginning of the line
[0-9]*: any digit 0 or more times
$: the end of the line
So your regex captures lines that are exclusively made up of digits. To match on the field simply containing one or more digits somewhere try using /[0-9]+/ or /\d+/ which are equivalent and each match 1 or more digits regardless of the rest of the line.
In total you should have:
if [fieldname] =~ /\d+/ {
# do stuff
}
| Logstash | 42,341,778 | 17 |
I need to customize log messages to a JSON format in my Rails app.
To illustrate, currently the log messages my app produces look like this:
I, [2015-04-24T11:52:06.612993 #90159] INFO -- : Started GET "/time_entries" for ::1 at 2015-04-24 11:52:06 -0400
As a result, log messages like the line above get written to my log.
I need to change this format to JSON. How can I make the log output formatted exactly like the following example?
{
"type" : "info",
"time" : "2015-04-24T11:52:06.612993",
"message" : "Started GET "/time_entries" for ::1 at 2015-04-24 11:52:06 -0400"
}
Note: It doesn't need to be pretty printed like my example, it just has to have the JSON format.
| You can configure rails to specify your own log formatter:
config.log_formatter defines the formatter of the Rails logger. This option defaults to an instance of ActiveSupport::Logger::SimpleFormatter for all modes except production, where it defaults to Logger::Formatter.
You can provide your own class to output the log information:
class MySimpleFormatter < ActiveSupport::Logger::SimpleFormatter
def call(severity, timestamp, _progname, message)
{
type: severity,
time: timestamp,
message: message
}.to_json
end
end
To configure your new class you'd need to add a config line:
config.log_formatter = MySimpleFormatter.new
| Logstash | 29,855,097 | 16 |
I wanted to make a copy of a nested field in a Logstash filter but I can't figure out the correct syntax.
Here is what I try:
incorrect syntax:
mutate {
add_field => { "received_from" => %{beat.hostname} }
}
beat.hostname is not replaced
mutate {
add_field => { "received_from" => "%{beat.hostname}" }
}
beat.hostname is not replaced
mutate {
add_field => { "received_from" => "%{[beat][hostname]}" }
}
beat.hostname is not replaced
mutate {
add_field => { "received_from" => "%[beat][hostname]" }
}
No way. If I give a non nested field it works as expected.
The data structure received by logstash is the following:
{
"@timestamp" => "2016-08-24T13:01:28.369Z",
"beat" => {
"hostname" => "etg-dbs-master-tmp",
"name" => "etg-dbs-master-tmp"
},
"count" => 1,
"fs" => {
"device_name" => "/dev/vdb",
"total" => 5150212096,
"used" => 99287040,
"used_p" => 0.02,
"free" => 5050925056,
"avail" => 4765712384,
"files" => 327680,
"free_files" => 326476,
"mount_point" => "/opt/ws-etg/datas"
},
"type" => "filesystem",
"@version" => "1",
"tags" => [
[0] "topbeat"
],
"received_at" => "2016-08-24T13:01:28.369Z",
"received_from" => "%[beat][hostname]"
}
| EDIT:
Since you didn't show your input message I worked off your output. In your output the field you are trying to copy into already exists, which is why you need to use replace. If it does not exist, you do in deed need to use add_field. I updated my answer for both cases.
EDIT 2: I realised that your problem might be to access the value that is nested, so I added that as well :)
you are using the mutate filter wrong/backwards.
First mistake:
You want to replace a field, not add one. In the docs, it gives you the "replace" option. See: https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html#plugins-filters-mutate-replace
Second mistake, you are using the syntax in reverse. It appears that you believe this is true:
"text I want to write" => "Field I want to write it in"
While this is true:
"myDestinationFieldName" => "My Value to be in the field"
With this knowledge, we can now do this:
mutate {
replace => { "[test][a]" => "%{s}"}
}
or if you want to actually add a NEW NOT EXISTING FIELD:
mutate {
add_field => {"[test][myNewField]" => "%{s}"}
}
Or add a new existing field with the value of a nested field:
mutate {
add_field => {"some" => "%{[test][a]}"}
}
Or more details, in my example:
input {
stdin {
}
}
filter {
json {
source => "message"
}
mutate {
replace => { "[test][a]" => "%{s}"}
add_field => {"[test][myNewField]" => "%{s}"}
add_field => {"some" => "%{[test][a]}"}
}
}
output {
stdout { codec => rubydebug }
}
This example takes stdin and outputs to stdout. It uses a json filter to parse the message, and then the mutate filter to replace the nested field. I also add a completely new field in the nested test object.
And finally creates a new field "some" that has the value of test.a
So for this message:
{"test" : { "a": "hello"}, "s" : "to_Repalce"}
We want to replace test.a (value: "Hello") with s (Value: "to_Repalce"), and add a field test.myNewField with the value of s.
On my terminal:
artur@pandaadb:~/dev/logstash$ ./logstash-2.3.2/bin/logstash -f conf2/
Settings: Default pipeline workers: 8
Pipeline main started
{"test" : { "a": "hello"}, "s" : "to_Repalce"}
{
"message" => "{\"test\" : { \"a\": \"hello\"}, \"s\" : \"to_Repalce\"}",
"@version" => "1",
"@timestamp" => "2016-08-24T14:39:52.002Z",
"host" => "pandaadb",
"test" => {
"a" => "to_Repalce",
"myNewField" => "to_Repalce"
},
"s" => "to_Repalce"
"some" => "to_Repalce"
}
The value has successfully been replaced.
A field "some" with the replaced value has been added.
A new field in the nested object has been added.
if you use add_field, it will convert a into an array and append your value there.
Hope this solves your issue,
Artur
| Logstash | 39,124,087 | 16 |
I'm trying to download the JDBC connector, but I cannot find macOS among the selection options at the link below:
https://dev.mysql.com/downloads/connector/j/
Where can I download the MySQL connector for macOS?
Or is the JDBC connector already installed on macOS?
I'm trying to use Logstash to transfer MySQL data into Elasticsearch.
| MySQL Connector/J is a Java library, and it is a pure Java driver, so it is platform independent. The various installers offered on the download page are just to simplify installation (although generally, installing Java libraries using an installer makes very little sense to me).
For MacOS, you can use 'platform independent' and either download the tar.gz or the zip, whichever you feel is simpler to unpack.
For development purposes, it would be simpler to use the MySQL Connector/J Maven dependency, for example:
<dependency>
<groupId>com.mysql</groupId>
<artifactId>mysql-connector-j</artifactId>
<version>8.2.0</version>
</dependency>
(NOTE: versions 8.0.30 and older use artifactId mysql-connector-java).
| Logstash | 53,312,893 | 16 |
Is it possible to use kibana front-end along with a mongodb back-end without using elastic search?
I'm using logstash to parse logs and store them in mongodb, and I want to use kibana to display the data.
If not, are there any alternatives to implement kibana+mongodb?
| I'm afraid that Kibana is specifically designed to use the Elasticsearch API.
While they do both provide JSON responses, they don't return compatible data structures and even if they did, Mongo would not provide the same features (facets/filters) that Kibana makes heavy use of.
You could probably index your MongoDB data in Elasticsearch following instructions similar to https://coderwall.com/p/sy1qcw but then you are unnecessarily duplicating your data in 2 systems.
| Logstash | 24,248,609 | 15 |
I have data in an AWS RDS, and I would like to pipe it over to an AWS ES instance, preferably updating once an hour, or similar.
On my local machine, with a local mysql database and Elasticsearch database, it was easy to set this up using Logstash.
Is there a "native" AWS way to do the same thing? Or do I need to set up an EC2 server and install Logstash on it myself?
| You can achieve the same thing with your local Logstash, simply point your jdbc input to your RDS database and the elasticsearch output to your AWS ES instance. If you need to run this regularly, then yes, you'd need to setup a small instance to run Logstash on it.
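A minimal sketch of that pipeline (the endpoint, credentials, driver jar, table and index names are all placeholders to replace with your own):
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://my-rds-instance.abc123.us-east-1.rds.amazonaws.com:3306/mydb"
    jdbc_user => "user"
    jdbc_password => "password"
    jdbc_driver_library => "/path/to/mysql-connector-java.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    schedule => "0 * * * *"   # once an hour, as mentioned in the question
    statement => "SELECT * FROM my_table WHERE updated_at > :sql_last_value"
  }
}
output {
  elasticsearch {
    hosts => ["https://my-domain.us-east-1.es.amazonaws.com:443"]
    index => "my_index"
  }
}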
A more "native" AWS solution to achieve the same thing would include the use of Amazon Kinesis and AWS Lambda.
Here's a good article explaining how to connect it all together, namely:
how to stream RDS data into a Kinesis Stream
configuring a Lambda function to handle the stream
push the data to your AWS ES instance
| Logstash | 43,688,180 | 15 |
I'm trying to sync data between MySQL and Elasticsearch with Logstash.
I set up multiple jdbc inputs and multiple outputs to different Elasticsearch indexes... and I am doing something wrong, because everything is going to the else block.
Here is my config:
input {
jdbc {
jdbc_connection_string => "jdbc:mysql:127.0.0.1:3306/whatever"
jdbc_user => "xxx"
jdbc_password => "yyy"
jdbc_driver_library => "mysql-connector-java-5.1.41.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
schedule => "* * * * *"
statement => "SELECT * from table1 WHERE updated_at > :sql_last_value order by updated_at"
use_column_value => true
tracking_column => updated_at
type => "table1"
last_run_metadata_path => "/opt/logstash-5.4.0/sql-last-values/table1"
}
jdbc {
jdbc_connection_string => "jdbc:mysql:127.0.0.1:3306/whatever"
jdbc_user => "xxx"
jdbc_password => "yyy"
jdbc_driver_library => "mysql-connector-java-5.1.41.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
schedule => "* * * * *"
statement => "SELECT * from table2 WHERE updated_at > :sql_last_value order by updated_at"
use_column_value => true
tracking_column => updated_at
type => "table2"
last_run_metadata_path => "/opt/logstash-5.4.0/sql-last-values/table2"
}
}
output {
if [type] == "table1" {
elasticsearch {
hosts => ["localhost:9200"]
index => "table1"
document_type => "table1"
document_id => "%{id}"
}
file {
codec => json_lines
path => "/opt/logstash-5.4.0/logs/table1.log"
}
} else if [type] == "table2" {
elasticsearch {
hosts => ["localhost:9200"]
index => "table2"
document_type => "table2"
document_id => "%{id}"
}
} else {
file {
codec => json_lines
path => "/opt/logstash-5.4.0/logs/unknown.log"
}
}
}
What am I doing wrong? Everything is going to the else block and ends up in /opt/logstash-5.4.0/logs/unknown.log.
Is my approach wrong? Should I have multiple files?
thank you in advance
| Solution found!
I used tags instead of type
input {
jdbc {
...
tags => "table1"
...
}
jdbc {
...
tags => "table2"
...
}
}
output {
if "table1" in [tags] {
}
https://discuss.elastic.co/t/solved-multiple-logstash-config-file/51692/10
| Logstash | 44,293,952 | 15 |
I am trying to connect Logstash with Elasticsearch but cannot get it working.
Here is my logstash conf:
input {
stdin {
type => "stdin-type"
}
file {
type => "syslog-ng"
# Wildcards work, here :)
path => [ "/var/log/*.log", "/var/log/messages", "/var/log/syslog" ]
}
}
output {
stdout { }
elasticsearch{
type => "all"
embedded => false
host => "192.168.0.23"
port => "9300"
cluster => "logstash-cluster"
node_name => "logstash"
}
}
And I only changed these details in my elasticsearch.yml
cluster.name: logstash-cluster
node.name: "logstash"
node.master: false
network.bind_host: 192.168.0.23
network.publish_host: 192.168.0.23
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["localhost"]
With these configurations I could not make Logstash connect to ES. Can someone please suggest where I am going wrong?
| First, I suggest matching your "type" attributes up.
In your input you have 2 different types, and in your output you have a type that doesn't exist in any of your inputs.
For testing, change your output to:
output {
stdout { }
elasticsearch{
type => "stdin-type"
embedded => false
host => "192.168.0.23"
port => "9300"
cluster => "logstash-cluster"
node_name => "logstash"
}
}
Then, have you created an index on your ES instance?
From the guides I've used and my own experience (others may have another way that works), I've always used an index so that when I push something into ES, I can use the ES API and quickly check whether the data has gone in or not.
Another suggestion would be to simply run your Logstash forwarder and indexer with debug flags to see what is going on behind the scenes.
Can you connect to your ES instance on 127.0.0.1? Also, try to experiment with the port and host. As a rather new user of the Logstash system, I found that my understanding at the start went against the reality of the setup. Sometimes the host IP isn't what you think it is, as well as the port. If you are willing to check your network and identify listening ports and IPs, then you can sort this out, otherwise do some intelligent trial and error.
I highly recommend this guide as a comprehensive starting point. Both points I've mentioned are (in)directly touched upon in the guide. While the guide has a slightly more complex starting point, the ideas and concepts are thorough.
| Logstash | 17,046,047 | 14 |
I'm trying to parse my apache2 error log and I'm having a bit of trouble: it doesn't seem to be matching the filter. I'm pretty sure the timestamp piece is wrong, but I'm not certain, and I can't really find any documentation to figure it out. Also, is there a way to get what is in fields.errmsg into @message?
Log
[Wed Jun 26 22:13:22 2013] [error] [client 10.10.10.100] PHP Fatal error: Uncaught exception '\Foo\Bar'
Shipper Config
input {
file {
'path' => '/var/log/apache2/*-error.log'
'type' => 'apache-error'
}
}
filter {
grok {
type => "apache-error"
pattern => "\[%{HTTPDATE:timestamp}\] \[%{WORD:class}\] \[%{WORD:originator} %{IP:clientip}\] %{GREEDYDATA:errmsg}"
}
}
output {
stdout {}
redis {
'data_type' => 'list'
'host' => 'logstash.server.net'
'key' => 'logstash'
}
}
| Ahoy!
I know I'm a little late to the party, but here it goes!
I created a /etc/logstash/patterns.d/ directory on the system and put a file named apache-error in it containing:
APACHE_ERROR_TIME %{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{YEAR}
APACHE_ERROR_LOG \[%{APACHE_ERROR_TIME:timestamp}\] \[%{LOGLEVEL:loglevel}\] (?:\[client %{IPORHOST:clientip}\] ){0,1}%{GREEDYDATA:errormsg}
/etc/logstash/patterns.d/ will be referenced in the logstash configuration as follows:
grok {
patterns_dir => [ "/etc/logstash/patterns.d" ]
match => [ "message", "%{APACHE_ERROR_LOG}" ]
}
You can test it out at the Grok Debugger, like Adam mentioned in his comment. It seems to work just fine with the sample log entry you sent, and it has been working pretty solidly for me with one of my clients.
The pattern above puts the final message in the errormsg field, so I just delete the message field.
This is what I am currently using in my logstash configuration:
filter {
if [type] == "apache_error_log" {
grok {
patterns_dir => [ "/etc/logstash/patterns.d" ]
match => [ "message", "%{APACHE_ERROR_LOG}" ]
}
if !("_grokparsefailure" in [tags]) {
mutate {
remove_field => [ "message" ]
add_field => ["timestamp_submitted", "%{@timestamp}"]
}
date {
# Try to pull the timestamp from the 'timestamp' field (parsed above with
# grok). The apache time format looks like: "18/Aug/2011:05:44:34 -0700"
# Sat Feb 08 06:31:09 2014
match => [ "timestamp", "EEE MMM dd HH:mm:ss yyyy" ]
remove_field => [ "timestamp" ]
}
geoip {
source => "clientip"
}
}
}
}
Note that I am using a type of apache_error_log instead of apache-error.
Give it a shot. I would love to know if this works for you and others!
Cheers!
| Logstash | 17,331,593 | 14 |
I'm still a bit confused after reading the documentation provided by Logstash. I'm planning on writing an Android app, and I want to log the activity of the app. Logs will be sent over the network. Is Logstash not the right solution, since it needs to have an "agent" installed on the systems that produce logs?
I want a system that can store logs from the app activity, but it also needs to be able to export the collected logs into a plain-text file. I know Logstash can output to Elasticsearch, but I'm not sure if it can export to a plain-text file at the same time. Or is that a task that Elasticsearch should do?
thanks a ton for any input you can provide
| Logstash forwarder isn't currently available for android/ios unfortunately, nor could I find any existing solution for it from the community. (I asked the same question here but was voted off-topic because it was deemed asking for tool/library suggestions).
Your best bet unfortunately is either to write one yourself (which isn't trivial: you'll need to factor in offline connectivity, batching, scheduling, compression, file tracking, and so on), or to use other (usually commercial) logging services such as LogEntries.
By the way, the android/ios clients for LogEntries is open source. I'm not clear on its OSS licensing, but if you're to write an agent for logstash yourself, you could perhaps start by looking at LogEntries' android agent implementation, which already solves all the technical problems mentioned above. https://github.com/logentries/le_android.
And to answer your other question: yes, Logstash can receive your logs (from the mobile device), usually via the lumberjack input (aka logstash-forwarder). Logstash can then persist and index those logs to Elasticsearch, provided it's configured that way.
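As a rough sketch of that last part (the port and certificate paths are placeholders; the lumberjack input requires SSL), the Logstash side could look something like this, which both indexes into Elasticsearch and keeps a plain-text copy on disk:
input {
  lumberjack {
    port => 5043                                                      # hypothetical port
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"    # placeholder cert/key paths
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
output {
  elasticsearch { host => "localhost" }                               # index for searching
  file { path => "/var/log/central/app-%{+YYYY.MM.dd}.log" }          # plain-text export
}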
| Logstash | 20,099,362 | 14 |
I have a drupal watchdog syslog file that I want to parse into essentially two nested fields, the syslog part and the message part so that I get this result
syslogpart: {
timestamp: "",
host: "",
...
},
messagepart:{
parsedfield1: "",
parsedfield2: "",
...
}
I tried making a custom pattern that looks like this:
DRUPALSYSLOG (%{SYSLOGTIMESTAMP:date} %{SYSLOGHOST:logsource} %{WORD:program}: %{URL:domain}\|%{EPOCH:epoch}\|%{WORD:instigator}\|%{IP:ip}\|%{URL:referrer}\|%{URL:request}\|(?<user_id>\d+)\|\|)
and then run match => ['message', '%{DRUPALSYSLOG:drupal}']
but I don't get a nested response; I get a text block drupal: "ALL THE MATCHING FIELDS IN ONE STRING", and then all the matches separately as well, not nested under drupal but rather on the same level.
| Actually, you can do something like that in your pattern config
%{WORD:[drupal][program]}
It will create the json object like
drupal:{
program: "..."
}
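Applied to the pattern from the question, that nested syntax would look roughly like this (keeping the question's custom URL/EPOCH patterns as they are, and continuing the same way for the remaining captures):
DRUPALSYSLOG %{SYSLOGTIMESTAMP:[syslogpart][timestamp]} %{SYSLOGHOST:[syslogpart][host]} %{WORD:[syslogpart][program]}: %{URL:[messagepart][domain]}\|%{EPOCH:[messagepart][epoch]}\|%{WORD:[messagepart][instigator]}\|%{IP:[messagepart][ip]}\|...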
| Logstash | 28,748,674 | 14 |
I'm having trouble with Elasticsearch: how can I set the document id to another field from my log file?
| In the elasticsearch output you can set the document_id for the event you are shipping. This will end up being the _id in elasticsearch. You can use all sorts of parameters / field references / ... that are available in the logstash config. Like so:
elasticsearch {
host => yourEsHost
cluster => "yourCluster"
index => "logstash-%{+YYYY.MM.dd}"
document_id => "%{someFieldOfMyEvent}"
}
In this example someFieldOfMyEvent ends up being the _id of this event in ES.
| Logstash | 30,391,898 | 14 |
Recently I started working on an ElasticSearch (ES) implementation for a legacy e-commerce app written in PHP using MySQL. I am completely new to all this stuff; reading the docs is fine, yet I really need somebody with experience to advise me.
From the ES documentation I was able to set up a new cluster, and I also found out that rivers are deprecated and should be replaced, so I replaced them with Logstash and the JDBC MySQL connector.
At this point I have:
ElasticSearch
Logstash
JDBC MySQL driver
MySQL server
The database structure of the application is not really optimal and is very hard to replace, but I'd like to replicate it into the ES index in the best possible way.
DB Structure:
Products
+-------------------------------+-------+--------+
| Id | Title | Price |
+-------------------------------+-------+--------+
| 00c8234d71c4e94f725cd432ebc04 | Alpha | 589,00 |
| 018357657529fef056cf396626812 | Beta | 355,00 |
| 01a2c32ceeff0fc6b7dd4fc4302ab | Gamma | 0,00 |
+-------------------------------+-------+--------+
Flags
+------------+-------------+
| Id | Title |
+------------+-------------+
| sellout | Sellout |
| discount | Discount |
| topproduct | Top Product |
+------------+-------------+
flagsProducts (n:m pivot)
+------+-------------------------------+------------+------------+
| Id | ProductId | FlagId | ExternalId |
+------+-------------------------------+------------+------------+
| 1552 | 00c8234d71c4e94f725cd432ebc04 | sellout | NULL |
| 2845 | 00c8234d71c4e94f725cd432ebc04 | topproduct | NULL |
| 9689 | 018357657529fef056cf396626812 | discount | NULL |
| 4841 | 01a2c32ceeff0fc6b7dd4fc4302ab | discount | NULL |
+------+-------------------------------+------------+------------+
Those string IDs are a complete disaster (but I have to deal with them now). At first I thought I should build a flat Products index in ES, but what about the multiple entity bindings?
| That's a great start!
I would definitely flatten it all out (i.e. denormalize) and come up with product documents that look like the one below. That way you get rid of the N:M relationship between products and flags by simply creating a flags array for each product. It will thus be easier to query those flags.
{
"id": "00c8234d71c4e94f725cd432ebc04",
"title": "Alpha",
"price": 589.0,
"flags": ["Sellout", "Top Product"]
}
{
"id": "018357657529fef056cf396626812",
"title": "Beta",
"price": 355.0,
"flags": ["Discount"]
}
{
"id": "01a2c32ceeff0fc6b7dd4fc4302ab",
"title": "Gamma",
"price": 0.0,
"flags": ["Discount"]
}
The product mapping type would look like this:
PUT products
{
"mappings": {
"product": {
"properties": {
"id": {
"type": "string",
"index": "not_analyzed"
},
"title": {
"type": "string"
},
"price": {
"type": "double",
"null_value": 0.0
},
"flags": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
}
Since you have the logstash jdbc input already, all you're missing is the proper SQL query to fetch the products and associated flags.
SELECT p.Id as id, p.Title as title, p.Price as price, GROUP_CONCAT(f.Title) as flags
FROM Products p
JOIN flagsProducts fp ON fp.ProductId = p.Id
JOIN Flags f ON fp.FlagId = f.id
GROUP BY p.Id
Which would get you rows like these:
+-------------------------------+-------+-------+---------------------+
| id | title | price | flags |
+-------------------------------+-------+-------+---------------------+
| 00c8234d71c4e94f725cd432ebc04 | Alpha | 589 | Sellout,Top product |
| 018357657529fef056cf396626812 | Beta | 355 | Discount |
| 01a2c32ceeff0fc6b7dd4fc4302ab | Gamma | 0 | Discount |
+-------------------------------+-------+-------+---------------------+
Using Logstash filters you can then split the flags into an array and you're good to go.
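For example, a minimal filter for that last step (assuming the flags column comes back comma-separated from GROUP_CONCAT, as in the query above) could be:
filter {
  # turn "Sellout,Top product" into ["Sellout", "Top product"]
  mutate {
    split => { "flags" => "," }
  }
}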
| Logstash | 36,915,428 | 14 |
I get a Mapper Parsing Error from Elasticsearch when indexing logs from Filebeat.
I tried both the Filebeat -> Elasticsearch and the Filebeat -> Logstash -> Elasticsearch approach.
I have followed their documentation, installed the Filebeat template as instructed, and verified it per Loading the Index Template in Elasticsearch | Filebeat Reference.
My Elasticsearch normally works fine with my other data indexing, and I tested that in Kibana. It's an official Docker Hub Elasticsearch installation.
Googled a lot without any luck so, any help is appreciated.
UPDATE 1:
ES version: 2.3.3 (I believe latest one)
Template file is the default shipped with filebeat.
{
"mappings": {
"_default_": {
"_all": {
"norms": false
},
"dynamic_templates": [
{
"fields": {
"mapping": {
"ignore_above": 1024,
"type": "keyword"
},
"match_mapping_type": "string",
"path_match": "fields.*"
}
}
],
"properties": {
"@timestamp": {
"type": "date"
},
"beat": {
"properties": {
"hostname": {
"ignore_above": 1024,
"type": "keyword"
},
"name": {
"ignore_above": 1024,
"type": "keyword"
}
}
},
"input_type": {
"ignore_above": 1024,
"type": "keyword"
},
"message": {
"norms": false,
"type": "text"
},
"offset": {
"type": "long"
},
"source": {
"ignore_above": 1024,
"type": "keyword"
},
"type": {
"ignore_above": 1024,
"type": "keyword"
}
}
}
},
"order": 0,
"settings": {
"index.refresh_interval": "5s"
},
"template": "filebeat-*"
}
UPDATE 2:
You are right, see
#/usr/share/filebeat/bin/filebeat --version
filebeat version 5.0.0-alpha2 (amd64), libbeat 5.0.0-alpha2
This is posting the Apache log to Logstash, but I can't get this vhost_combined log parsed in the right format:
sub1.example.com:443 1.9.202.41 - - [03/Jun/2016:06:58:17 +0000] "GET /notifications/pendingCount HTTP/1.1" 200 591 0 32165 "https://sub1.example.com/path/index?var=871190" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36"
"message" => "%{HOSTNAME:vhost}\:%{NUMBER:port} %{COMBINEDAPACHELOG}"
| You cannot use "type": "keyword" with ES 2.3.3 since that's a new data type in ES 5 (currently in alpha3)
You need to replace all those occurrences with
"type": "string",
"index": "not_analyzed"
You need to use filebeat.template-es2x.json instead.
| Logstash | 37,599,410 | 14 |
I am using ELK stack for centralised logging from my Django server. My ELK stack is on a remote server and logstash.conf looks like this:
input {
tcp {
port => 5959
codec => json
}
}
output {
elasticsearch {
hosts => ["xx.xx.xx.xx:9200"]
}
}
Both services elasticsearch and logstash are working (checked using docker-compose logs logstash).
My Django server's settings file has logging configured as below:
LOGGING = {
'version': 1,
'handlers': {
'logstash': {
'level': 'INFO',
'class': 'logstash.TCPLogstashHandler',
'host': 'xx.xx.xx.xx',
'port': 5959, # Default value: 5959
'version': 0, # Version of logstash event schema. Default value: 0 (for backward compatibility of the library)
'message_type': 'django', # 'type' field in logstash message. Default value: 'logstash'.
'fqdn': True, # Fully qualified domain name. Default value: false.
'tags': ['django.request'], # list of tags. Default: None.
},
},
'loggers': {
'django.request': {
'handlers': ['logstash'],
'level': 'DEBUG',
},
}
}
I run my Django server and the Logstash handler handles the logs (the console shows no logs). I used the python-logstash library in the Django server to construct the above config, but the logs are not sent to my remote server.
I checked through many questions, verified that services are running and ports are correct, but I have no clue why the logs are not being sent to Logstash.
| Looking at the configuration, logger "django.request" is set to level "DEBUG" and the handler "logstash" is set to level "INFO". My guess is that the handler will not process DEBUG messages. I'm not sure though.
Set the same level for the logger and the handler to test that it works.
What level to use depends on what you want from your logs. In this case I guess level INFO would suffice.
If you haven't already, take a look at Django logging.
NOTE: From comments it seems not to solve the problem but I hope it is useful anyway.
UPDATE:
I tried the following configuration and it catches 404 and 500 errors in the "debug.log".
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'handlers': {
'logfile': {
'level': 'WARNING',
'class': 'logging.FileHandler',
'filename': os.path.join(PROJECT_DIR, 'debug.log'),
},
},
'loggers': {
'django.request': {
'handlers': ['logfile'],
'level': 'WARNING',
'propagate': True,
},
}}
With this test configuration the logstash handler should at least receive the message/logrecord. If there's no luck, I suggest trying to debug the logstash.TCPLogstashHandler and the
SocketHandler (inherited by TCPLogstashHandler) to make sure they receive the emitted record.
| Logstash | 45,117,988 | 14 |
I am trying to add a custom field to the logstash appender in logback-spring.xml, like this:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<appender name="stash" class="net.logstash.logback.appender.LogstashSocketAppender">
<host>xx.xx.xx.xx</host>
<port>xxxxx</port>
<encoder class="net.logstash.logback.encoder.LogstashEncoder">
<includeMdcKeyName>myField</includeMdcKeyName>
</encoder>
</appender>
<root level="info">
<appender-ref ref="stash" />
</root>
</configuration>
It gives me error:
Exception in thread "main" java.lang.IllegalStateException: Logback configuration error detected:
ERROR in ch.qos.logback.core.joran.spi.Interpreter@34:71 - no applicable action for [encoder], current ElementPath is [[configuration][appender][encoder]]
When I tried the console appender and printed that field as in the sample below, it worked.
<layout>
<Pattern>%-4r [%thread] %-5level My Field: [%X{myField:--}] %msg%n</Pattern>
</layout>
Can you tell me what I did wrong with the UDP appender? Thank you in advance.
| You're using a UDP appender, and it does not have an encoder. You should use the TCP appender (LogstashTcpSocketAppender instead of LogstashSocketAppender):
<appender name="stash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
<destination>xx.xx.xx.xx:xxxxx</destination>
<encoder class="net.logstash.logback.encoder.LogstashEncoder">
<includeMdcKeyName>myField</includeMdcKeyName>
</encoder>
</appender>
Take a look a demo project I've created here.
This code (Kotlin):
MDC.put("mdc", "so53558553")
LOG.warn("Warn")
With a logback-spring.xml like this:
<appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
<destination>localhost:5000</destination>
<encoder class="net.logstash.logback.encoder.LogstashEncoder">
<includeMdcKeyName>mdc</includeMdcKeyName>
</encoder>
</appender>
Produces such records in Logstash:
{
"level_value" => 30000,
"mdc" => "so53558553",
"port" => 35450,
"logger_name" => "by.dev.madhead.playgrounds.so53558553.SpringBootConsoleApplication",
"host" => "172.17.0.1",
"@version" => "1",
"@timestamp" => 2018-12-03T01:16:28.793Z,
"thread_name" => "main",
"message" => "Warn",
"level" => "WARN"
}
As you see, mdc values are seen by Logstash as a field in the LoggingEvent.
EDIT
You may not see your field in Kibana due to an ELK misconfiguration. I'm pasting my Logstash pipeline config (/etc/logstash/conf.d/01-input.conf) just for reference (it's very basic):
input {
tcp {
port => 5000
codec => json_lines
}
}
output {
elasticsearch {
hosts => [ "localhost:9200" ]
index => "logback-%{+YYYY.MM.dd}"
}
}
Then I've configured the logs in Kibana with a logback-* index pattern, and voilà: the mdc field shows up there as well.
| Logstash | 53,558,553 | 14 |
I've tried updating the number of replicas as follows, according to the documentation
curl -XPUT 'localhost:9200/_settings' -d '
{ "index" : { "number_of_replicas" : 4 } }'
This correctly changes the replica count for existing indices.
Is there a way to permanently change the default value for this setting without updating all the elasticsearch.yml files in the cluster and restarting the services?
FWIW I've also tried
curl -XPUT 'localhost:9200/logstash-*/_settings' -d '
{ "index" : { "number_of_replicas" : 4 } }'
to no avail.
| Yes, you can use index templates. Index templates are a great way to set default settings (including mappings) for new indices created in a cluster.
Index Templates
Index templates allow to define templates that will automatically be
applied to new indices created. The templates include both settings
and mappings, and a simple pattern template that controls if the
template will be applied to the index created.
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-templates.html
For your example:
curl -XPUT 'localhost:9200/_template/logstash_template' -d '
{
"template" : "logstash-*",
"settings" : {"number_of_replicas" : 4 }
} '
This will set the default number of replicas to 4 for all new indexes that match the name "logstash-*". Note that this will not change existing indexes, only newly created ones.
| Logstash | 24,553,718 | 13 |
So, I have a web platform that prints a JSON file per request, containing some log data about that request. I can configure several rules about when it should log stuff, only at certain levels, etc...
Now, I've been toying with the Logstash + Elasticsearch + Kibana3 stack, and I'd love to find a way to see those logs in Kibana. My question is: is there a way to make Logstash import these kinds of files, or would I have to write a custom input plugin for it? I've searched around and, from what I've seen, plugins are written in Ruby, a language I don't have experience with.
| Logstash is a very good tool for processing dynamic files.
Here is the way to import your json file into elasticsearch using logstash:
configuration file:
input
{
file
{
path => ["/path/to/json/file"]
start_position => "beginning"
sincedb_path => "/dev/null"
exclude => "*.gz"
}
}
filter
{
mutate
{
replace => [ "message", "%{message}" ]
gsub => [ 'message','\n','']
}
if [message] =~ /^{.*}$/
{
json { source => message }
}
}
output
{
elasticsearch {
protocol => "http"
codec => json
host => "localhost"
index => "json"
embedded => true
}
stdout { codec => rubydebug }
}
example of json file:
{"foo":"bar", "bar": "foo"}
{"hello":"world", "goodnight": "moon"}
Note that the JSON needs to be on one line. If you want to parse a multiline JSON file, replace the relevant fields in your configuration file:
input
{
file
{
codec => multiline
{
pattern => '^\{'
negate => true
what => previous
}
path => ["/opt/mount/ELK/json/*.json"]
start_position => "beginning"
sincedb_path => "/dev/null"
exclude => "*.gz"
}
}
filter
{
mutate
{
replace => [ "message", "%{message}}" ]
gsub => [ 'message','\n','']
}
if [message] =~ /^{.*}$/
{
json { source => message }
}
}
| Logstash | 25,977,423 | 13 |
We have data that is coming from external sources as below in csv file:
orderid,OrderDate,BusinessMinute,Quantity,Price
31874,01-01-2013,00:06,2,17.9
The data has the date in one column and the time in another column, and I need to generate a timestamp by combining those two columns.
I am using the csv filter to read the above data from the file, with the Logstash configuration below, which generates its own timestamp:
input {
file {
path => "/root/data/import/Order.csv"
start_position => "beginning"
}
}
filter {
csv {
columns => ["orderid","OrderDate","BusinessMinute","Quantity","Price"]
separator => ","
}
}
output {
elasticsearch {
action => "index"
host => "localhost"
index => "demo"
workers => 1
}
}
How can I make the combination of OrderDate + BusinessMinute the @timestamp?
| Use a mutate filter to combine the OrderDate and BusinessMinute fields into a single (temporary) field, then use the date filter and have it delete the field if it's successful.
filter {
mutate {
add_field => {
"timestamp" => "%{OrderDate} %{BusinessMinute}"
}
}
date {
match => ["timestamp", "..."]
remove_field => ["timestamp"]
}
}
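For the sample row above, the combined string looks like 01-01-2013 00:06, so the date pattern might be something like the following; this assumes day-first dates, which is only an assumption here (use MM-dd-yyyy HH:mm instead if the source is month-first):
date {
    match => ["timestamp", "dd-MM-yyyy HH:mm"]
    remove_field => ["timestamp"]
}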
| Logstash | 28,879,131 | 13 |
We are using ELK for log aggregation. Is it possible to search for events that occurred during a particular time range? Let's say I want to see all exceptions that occurred between 10am and 11am in the last month.
Is it possible to extract the time part from @timestamp and do a range search on that somehow (similar to date() in SQL)?
| Thanks to Magnus who pointed me to looking at scripted fields. Take a look at:
https://www.elastic.co/blog/kibana-4-beta-3-now-more-filtery
or
https://www.elastic.co/guide/en/elasticsearch/reference/1.3/search-request-script-fields.html
Unfortunately you can not use these scripted fields in queries but only in visualisations.
So I resorted to a workaround and used Logstash's drop filter to remove the events I don't want to show up in Kibana in the first place. That is not perfect, for obvious reasons, but it does the job.
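A rough sketch of that kind of workaround, using the older event['field'] Ruby API of that Logstash era (newer versions use event.get/event.set), might look like this; the hour_of_day field name and the 10am-11am window are purely illustrative:
filter {
  ruby {
    # hour_of_day is a hypothetical helper field holding the UTC hour of the event
    code => "event['hour_of_day'] = event['@timestamp'].time.hour"
  }
  if [hour_of_day] < 10 or [hour_of_day] >= 11 {
    drop { }   # keep only events between 10am and 11am (UTC)
  }
}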
| Logstash | 30,396,242 | 13 |
I have tab separated data which I want to input into logstash. Here is my configuration file:
input {
file {
path => "/*.csv"
type => "testSet"
start_position => "beginning"
}
}
filter {
csv {
separator => "\t"
}
}
output {
stdout {
codec => rubydebug
}
}
It simply looks for all .csv files and separates them using tabs. For an input like this:
col1 col2
data1 data2
logstash output is (for the two rows):
column1 => "col1\tcol2"
column1 => "data1\tdata2"
Obviously it is not correctly parsing it. I saw that this issue was brought up a while ago here but there was no solution. Does anyone know if this problem has been resolved or maybe there's another way to do it? Thanks!
| Instead of using "\t" as the separator, input an actual tab.
like this:
filter {
csv {
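      # note: the value between the quotes below is a literal tab character typed into the config, not the two characters \t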
separator => " "
}
}
| Logstash | 30,915,011 | 13 |
I have a Logstash instance running as a service that reads from Redis and outputs to Elasticsearch. I just noticed there was nothing new in Elasticsearch for the last few days, but the Redis lists were increasing.
Logstash log was filled with 2 errors repeated for thousands of lines:
:message=>"Got error to send bulk of actions"
:message=>"Failed to flush outgoing items"
The reason being:
{"error":"IllegalArgumentException[Malformed action/metadata line [107], expected a simple value for field [_type] but found [START_ARRAY]]","status":500},
Additionally, trying to stop the service failed repeatedly, I had to kill it. Restarting it emptied the Redis lists and imported everything to Elasticsearch. It seems to work ok now.
But I have no idea how to prevent that from happening again. The mentioned type field is set as a string for each input directive, so I don't understand how it could have become an array.
What am I missing?
I'm using Elasticsearch 1.7.1 and Logstash 1.5.3. The logstash.conf file looks like this:
input {
redis {
host => "127.0.0.1"
port => 6381
data_type => "list"
key => "b2c-web"
type => "b2c-web"
codec => "json"
}
redis {
host => "127.0.0.1"
port => 6381
data_type => "list"
key => "b2c-web-staging"
type => "b2c-web-staging"
codec => "json"
}
/* other redis inputs, only key/type variations */
}
filter {
grok {
match => ["msg", "Cache hit %{WORD:query} in %{NUMBER:hit_total:int}ms. Network: %{NUMBER:hit_network:int} ms. Deserialization %{NUMBER:hit_deserial:int}"]
add_tag => ["cache_hit"]
tag_on_failure => []
}
/* other groks, not related to type field */
}
output {
elasticsearch {
host => "[IP]"
port => "9200"
protocol=> "http"
cluster => "logstash-prod-2"
}
}
| According to your log message:
{"error":"IllegalArgumentException[Malformed action/metadata line [107], expected a simple value for field [_type] but found [START_ARRAY]]","status":500},
It seems you're trying to index a document with a type field that's an array instead of a string.
I can't help you without more of the logstash.conf file.
But check the following to make sure:
When you use add_field to change the type, you actually turn type into an array with multiple values, which is what Elasticsearch is complaining about.
You can use mutate join to convert arrays to strings (see the mutate join API docs):
filter {
mutate {
join => { "fieldname" => "," }
}
}
| Logstash | 32,354,862 | 13 |
I am parsing a set of data into an ELK stack for some non-tech folks to view. As part of this, I want to remove all fields except a specific known subset of fields from the events before sending into ElasticSearch.
I can explicitly specify each field to drop in a mutate filter like so:
filter {
mutate {
remove_field => [ "throw_away_field1", "throw_away_field2" ]
}
}
In this case, anytime a new field gets added to the input data (which can happen often since the data is pulled from a queue and used by multiple systems for multiple purposes) it would require an update to the filtering, which is extra overhead that's not needed. Not to mention if some sensitive data made it through between when the input streams were updated and when the filtering was updated, that could be bad.
Is there a way using the logstash filter to iterate over each field of an object, and remove_field if it is not in a provided list of field names? Or would I have to write a custom filter to do this? Basically, for every single object, I just want to keep 8 specific fields, and toss absolutely everything else.
It looks like very minimal if ![field] =~ /^value$/ type logic is available in the logstash.conf file, but I don't see any examples that would iterate over the fields themselves in a for each style and compare the field name to a list of values.
Answer:
After upgrading logstash to 1.5.0 to be able to use plugin extensions such as prune, the solution ended up looking like this:
filter {
prune {
interpolate => true
whitelist_names => ["fieldtokeep1","fieldtokeep2"]
}
}
| Prune whitelist should be what you're looking for.
For more specific control, dropping to the ruby filter is probably the next step.
| Logstash | 33,399,196 | 13 |
So, let's assume that I have a portion of a log line that looks something like this:
GET /restAPI/callMethod1/8675309
The GET matches an HTTP method and gets extracted; the remainder matches a URI and also gets extracted. Now, in the Logstash config, let's assume that I wanted to do something like this...
if [METHOD] == "GET" {
if [URI] (CONTAINS <--Is there a way to do this?) =="restAPI/callMethod1"{
....
Is there some way to do this? If so how would I go about doing that?
Thanks
| You can achieve it simply by using the =~ (regexp) operator like this (see conditionals):
if [METHOD] == "GET" {
if [URI] =~ /restAPI\/callMethod1/ {
...
| Logstash | 39,022,920 | 13 |
I am using Logstash 5.6.5 (on Windows), running on a standalone system (no cloud or cluster). I'm planning to watch some log files and post them to a locally running Elasticsearch. But when I checked Logstash's memory usage, it showed around 600MB even without any file-watching configuration. When I add file input pipeline configurations it adds memory further (watching 3 log files added up to 70MB, and I am planning to add more, up to 20 logs).
1. Is it the expected behaviour?
2. Is there any way to reduce the huge memory usage by logstash?
| After researching for a couple of days, below is my answer to my own question.
Below are the ways we can optimize Logstash memory:
Logstash memory usage is primarily driven by the JVM heap size. This can be effectively controlled by setting the heap size in the environment variable LS_JAVA_OPTS as below, before launching Logstash (for the Windows version in my case):
set "LS_JAVA_OPTS=-Xms512m -Xmx512m"
Alternatively, this can be added at the beginning of the setup.bat file.
In this way I have limited Logstash's total memory usage to a maximum of 620 MB.
Logstash pipeline configurations (input/filter/output) can be optimized using the methods mentioned here.
In this way I verified that my Logstash filter configurations are optimized.
Also, pipeline file-input configurations can be optimized using the properties below to ignore/close old log files, as explained here, which prevents the unnecessary creation of pipeline threads.
ignore_older - in seconds - to totally ignore any file older than the given seconds
max_open_files - in numbers - to optimize the maximum number of opened files
close_older - in seconds to close the older files
exclude - array of unwanted file names (with or without wildcard)
In my case I only needed to watch the recent files and ignore the older ones, so I set the configuration accordingly, as below:
input {
file {
#The application log path that will match with the rolling logs.
path => "c:/path/to/log/app-1.0-*.log"
#I didn't want logs older than an hour.
#If that older file gets updated with a new entry
#that will become the new file and the new entry will be read by Logstash
ignore_older => 3600
#I wanted to have only the very recent files to be watched.
#Since I am aware there won't be more then 5 files I set it to 5.
max_open_files => 5
#If the log file is not updated for 5 minutes close it.
#If any new entry gets added then it will be opened again.
close_older => 300
}
}
| Logstash | 48,576,637 | 13 |
I want to sync my MongoDB data to ElasticSearch, I read a lot of posts talking about elasticsearch river plugin and mongo connector, but all of them are deprecated for mongo 4 and elasticsearch 7!
As Logstash is proprietary software, I would like to use it to sync both... Does anyone know how it's possible to do it?
| You may sync MongoDB and Elasticsearch with Logstash; syncing is, in fact, one of the major applications of Logstash. After installing Logstash, all that you need to do is specify a pipeline for your use case: one or more input sources (MongoDB in your case) and one or more output sinks (Elasticsearch in your case), put as a config file (example follows) inside Logstash's pipeline directory; Logstash takes care of the rest.
Logstash officially provides plugins for a lot of commonly used data sources and sinks; those plugins let you read data from and write data to various sources with just a bit of configuration. You just need to find the right plugin, install it, and configure it for your scenario. Logstash has an official output plugin for Elasticsearch and its configuration is pretty intuitive.
In the end, your pipeline may look something like the following:
input {
mongodb {
uri => 'mongodb://10.0.0.30/my-logs?ssl=true'
placeholder_db_dir => '/opt/logstash-mongodb/'
placeholder_db_name => 'logstash_sqlite.db'
collection => 'events_'
batch_size => 5000
}
}
output {
stdout {
codec => rubydebug #outputs the same thing as elasticsearch in stdout to facilitate debugging
}
elasticsearch {
hosts => "localhost:9200"
index => "target_index"
document_type => "document_type"
document_id => "%{id}"
}
}
| Logstash | 56,628,675 | 13 |
I am trying out ELK to visualise my log files. I have tried different setups:
Logstash file input plugin https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html
Logstash Beats input plugin https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html with Filebeat Logstash output https://www.elastic.co/guide/en/beats/filebeat/current/logstash-output.html
Filebeat Elasticsearch output https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html
Can someone list out their differences and when to use which setup? If it is not for here, please point me to the right place like Super User or DevOp or Server Fault.
| 1) To use the logstash file input you need a logstash instance running on the machine from which you want to collect the logs. If the logs are on the same machine where you are already running logstash, this is not a problem; but if the logs are on remote machines, a logstash instance is not always recommended because it needs more resources than filebeat.
2 and 3) For collecting logs on remote machines, filebeat is recommended since it needs fewer resources than a logstash instance. You would use the logstash output if you want to parse your logs, add or remove fields, or do some enrichment on your data; if you don't need to do anything like that, you can use the elasticsearch output and send the data directly to elasticsearch.
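On the Logstash side, receiving events from filebeat is typically just a beats input (5044 is the commonly used default port; adjust as needed):
input {
  beats {
    port => 5044
  }
}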
This is the main difference: if your logs are on the same machine where you are running logstash, you can use the file input; if you need to collect logs from remote machines, you can use filebeat and send the data to logstash if you want to transform it, or directly to elasticsearch if you don't need any transformations.
Another advantage of using filebeat, even on the logstash machine, is that if your logstash instance is down, you won't lose any logs: filebeat will resend the events. With the file input you can lose events in some cases.
| Logstash | 58,585,855 | 13 |
I've got log lines in the following format and want to extract fields:
[field1: content1] [field2: content2] [field3: content3] ...
I neither know the field names, nor the number of fields.
I tried it with backreferences and the sprintf format but got no results:
match => [ "message", "(?:\[(\w+): %{DATA:\k<-1>}\])+" ] # not working
match => [ "message", "(?:\[%{WORD:fieldname}: %{DATA:%{fieldname}}\])+" ] # not working
This seems to work for only one field but not more:
match => [ "message", "(?:\[%{WORD:field}: %{DATA:content}\] ?)+" ]
add_field => { "%{field}" => "%{content}" }
The kv filter is also not appropriate because the content of the fields may contain whitespaces.
Is there any plugin / strategy to fix this problem?
| Logstash Ruby Plugin can help you. :)
Here is the configuration:
input {
stdin {}
}
filter {
ruby {
code => "
fieldArray = event['message'].split('] [')
for field in fieldArray
field = field.delete '['
field = field.delete ']'
result = field.split(': ')
event[result[0]] = result[1]
end
"
}
}
output {
stdout {
codec => rubydebug
}
}
With your logs:
[field1: content1] [field2: content2] [field3: content3]
This is the output:
{
"message" => "[field1: content1] [field2: content2] [field3: content3]",
"@version" => "1",
"@timestamp" => "2014-07-07T08:49:28.543Z",
"host" => "abc",
"field1" => "content1",
"field2" => "content2",
"field3" => "content3"
}
I have tried it with 4 fields, and it also works.
Please note that the event in the ruby code is the Logstash event. You can use it to get all your event fields, such as message, @timestamp, etc.
Enjoy it!!!
| Logstash | 24,605,331 | 12 |
I am just getting started with Logstash and wondering whether there is a way to specify block comments in the Logstash config file.
It would be very useful, as I am testing long grok patterns that span multiple lines.
| Currently there are no block comments available. You have to use single-line comments only, placing # at the start of whatever you need to comment out.
Comments can also be used anywhere in a line:
# this is a comment
input { # comments can appear at the end of a line, too
# ...
}
| Logstash | 31,249,588 | 12 |
I am working on a solution for centralized log file aggregation from our CentOS 6.x servers. After installing the Elasticsearch/Logstash/Kibana (ELK) stack I came across the Rsyslog omelasticsearch plugin, which can send messages from Rsyslog to Elasticsearch in Logstash format, and started asking myself why I need Logstash at all.
Logstash has a lot of different input plugins including the one accepting Rsyslog messages. Is there a reason why I would use Logstash for my use case where I need to gather the content of logs files from multiple servers? Also, is there a benefit of sending messages from Rsyslog to Logstash instead of sending them directly to Elasticsearch?
| I would use Logstash in the middle if there's something I need from it that rsyslog doesn't have. For example, getting GeoIP from an IP address.
If, on the other hand, I would need to get syslog or file contents indexed in Elasticsearch, I'd use rsyslog directly. It can do buffering (disk+memory), filtering, you can choose how the document will look like (you can put the textual severity instead of the number, for example), and it can parse unstructured data. But the main advantage is performance, on which rsyslog is focused on. Here's a presentation with some numbers (and tips and tricks) on Logstash, rsyslog and Elasticsearch:
http://blog.sematext.com/2015/05/18/tuning-elasticsearch-indexing-pipeline-for-logs/
| Logstash | 32,124,984 | 12 |
I have a logfile which looks like this ( simplified)
Logline sample
MyLine data={"firstname":"bob","lastname":"the builder"}
I'd like to extract the JSON contained in data and create two fields, one for the first name and one for the last name. However, the output I get is this:
{"message":"Line data={\"firstname\":\"bob\",\"lastname\":\"the builder\"}\r","@version":"1","@timestamp":"2015-11-26T11:38:56.700Z","host":"xxx","path":"C:/logstashold/bin/input.txt","MyWord":"Line","parsedJson":{"firstname":"bob","lastname":"the builder"}}
As you can see
..."parsedJson":{"firstname":"bob","lastname":"the builder"}}
That's not what I need; I need to create fields for firstname and lastname in Kibana, but Logstash isn't extracting the fields out with the json filter.
LogStash Config
input {
file {
path => "C:/logstashold/bin/input.txt"
}
}
filter {
grok {
match => { "message" => "%{WORD:MyWord} data=%{GREEDYDATA:request}"}
}
json{
source => "request"
target => "parsedJson"
remove_field=>["request"]
}
}
output {
file{
path => "C:/logstashold/bin/output.txt"
}
}
Any help greatly appreciated; I'm sure I'm missing something simple.
Thanks
| After your json filter, add another filter called mutate in order to add the two fields that you take from the parsedJson field.
filter {
...
json {
...
}
mutate {
add_field => {
"firstname" => "%{[parsedJson][firstname]}"
"lastname" => "%{[parsedJson][lastname]}"
}
}
}
For your sample log line above that would give:
{
"message" => "MyLine data={\"firstname\":\"bob\",\"lastname\":\"the builder\"}",
"@version" => "1",
"@timestamp" => "2015-11-26T11:54:52.556Z",
"host" => "iMac.local",
"MyWord" => "MyLine",
"parsedJson" => {
"firstname" => "bob",
"lastname" => "the builder"
},
"firstname" => "bob",
"lastname" => "the builder"
}
| Logstash | 33,937,936 | 12 |
I believe this is a simple question, but I still do not get it from the Docker Compose documentation.
I like external_links as I want to have core docker-compose and I want to extend it without overriding the core links.
What exactly I have, I am trying to setup logstash which depends on the elasticsearch. Elasticsearch is in the core docker-compose and the logstash is in the depending one. So I had to define the elastic search in the depended docker-compose as a reference as logstash need it as a link. BUT Elasticsearch has already its own links which I do not want to repeat them in the dependent one.
Can I do that with external_link instead of link?
I know that links will make sure that the link is up first before linking, does the external_link will do the same?
Any help is appreciated. Thanks.
| Use links when you want to link together containers within the same docker-compose.yml. All you need to do is set the link to the service name. Like this:
---
elasticsearch:
image: elasticsearch:latest
command: elasticsearch -Des.network.host=0.0.0.0
ports:
- "9200:9200"
logstash:
image: logstash:latest
command: logstash -f logstash.conf
ports:
- "5000:5000"
links:
- elasticsearch
If you want to link a container inside of the docker-compose.yml to another container that was not included in the same docker-compose.yml, or was started in a different manner, then you can use external_links, and you would set the link to the container's name. Like this:
---
logstash:
image: logstash:latest
command: logstash -f logstash.conf
ports:
- "5000:5000"
external_links:
- my_elasticsearch_container
I would suggest the first way unless your use case for some reason requires that they cannot be in the same docker-compose.yml
| Logstash | 35,154,441 | 12 |