
configuring filebeat with elk-docker image #70

Closed

raiusa opened this issue Sep 27, 2016 · 47 comments

@raiusa commented Sep 27, 2016

Firstly, I would like to let you know that this is not an issue with your elk-docker image, but rather a problem I am facing while trying to use a Filebeat Docker image with it. I created an ELK container based on your image and ran it successfully. I then built a separate image for Filebeat and deployed it on a machine other than the ELK server. I am struggling to figure out what I should put in the Logstash output host field of the Filebeat configuration file, as the ELK image is deployed by Mesosphere and I don't know its IP address.

@spujadas (Owner)

I'm unfamiliar with Mesosphere in particular but generally speaking the feature you're looking for is service discovery (https://docs.mesosphere.com/1.8/usage/service-discovery/mesos-dns/).
If Filebeat is also containerised and you/Marathon created the ELK container with the 'DNS' name elk.marathon.mesos, then from within your cluster that's the name you'd need to put into Filebeat's configuration.
From an instance of Filebeat running outside the cluster it looks as though you can work out the IP address of the ELK container using Mesosphere's REST API, as explained here: https://docs.mesosphere.com/1.8/usage/service-discovery/dns-naming/
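
For illustration, the Logstash output section of filebeat.yml might then look something like the following sketch (Filebeat 1.x syntax; the DNS name and certificate path are assumptions based on this discussion):

```yaml
# Hypothetical filebeat.yml excerpt (Filebeat 1.x syntax)
output:
  logstash:
    # Mesos-DNS name of the ELK container; 5044 is Logstash's Beats port
    hosts: ["elk.marathon.mesos:5044"]
    tls:
      # certificate presented by the image's Logstash Beats input
      certificate_authorities: ["/etc/pki/tls/certs/logstash-beats.crt"]
```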

Hope this points you in the right direction, otherwise you may want to check in with the Mesosphere community who will most certainly be able to help you with this one.

@raiusa (Author) commented Sep 28, 2016

Thank you so much for your response; it certainly helps me a lot. I think I should talk to the Mesosphere team and find out how the DNS name gets assigned when Marathon creates an ELK container. I don't see any option to provide a DNS name while containerising ELK in Mesosphere.
Again, your input is valuable, and thanks for that.

@raiusa (Author) commented Sep 30, 2016

Finally, with the help of the Mesosphere team, I was able to find out how Mesosphere's discovery service works and to update the Logstash and Filebeat configuration files. But now I'm getting an unauthorized error when trying to deploy the ELK Docker image through Marathon. I can run the same Docker image with the docker command (sudo docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -p 5000:5000 -it elk) without any issue. I'm wondering how all the configuration files are available in the container without using the -v (volume) option in the docker run command. My understanding was that a volume had to be created for the configuration files to be present in the container, and I'm thinking the unauthorized error I'm getting on Marathon is due to the cert.
Any help?

@spujadas (Owner)

This section of the documentation explains how to use your own configuration files: http://elk-docker.readthedocs.io/#updating-logstash-configuration
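
For illustration, a minimal sketch of running the image with a bind-mounted Logstash configuration directory (the host path is a hypothetical placeholder; the container-side path is the image's default):

```sh
docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 \
  -v /path/to/your/conf.d:/etc/logstash/conf.d \
  -it sebp/elk
```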

@raiusa (Author) commented Sep 30, 2016

Thank you so much for the quick reply.
I got that. But what about the configuration files which I haven't changed, like logstash-beats.crt and its key, and logstash-forwarder.crt and its key?

@raiusa (Author) commented Sep 30, 2016

One other thing: I noticed that in the Dockerfile you have added elasticsearch, logstash and kibana groups. Do I need to make any changes to those, as I'm trying to run under my own user?

@spujadas (Owner) commented Oct 1, 2016

> I got that. But what about the configuration files which I haven't changed, like logstash-beats.crt and its key, and logstash-forwarder.crt and its key?

Sorry, I'm not sure I understand the question. If you've simply extended the image or bind-mounted your additional config files, then the non-overwritten ones (including the *.{crt,key} files) will still be there as usual.

> One other thing: I noticed that in the Dockerfile you have added elasticsearch, logstash and kibana groups. Do I need to make any changes to those, as I'm trying to run under my own user?

In general, that shouldn't be necessary. These are the users that the various services are running as within the running container, but they're unrelated to the user you're starting the container with (if I understand your question correctly).
(However, if you're bind-mounting volumes containing files that one of the services needs access to, then you need to make sure that the files on the host are readable/writable by the UID/GID that the relevant service is running as.)
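
As a hedged illustration of that last point (the UID:GID values and host path below are hypothetical, and assume the image's logstash user):

```sh
# In a running ELK container, find the UID/GID that the logstash service runs as:
docker exec -it <container-name> id logstash

# Then, on the host, make the bind-mounted files readable by that UID/GID:
sudo chown -R 999:999 /path/to/your/conf.d
```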

If the above doesn't help, could you check that you can run a (default, then extended as per your needs) ELK container and feed it some logs using Filebeat on a vanilla Linux?
I'm having a hard time separating what could be due to the image not working properly and what could be Mesosphere-specific, so if we could have a working baseline, that would help me direct you towards the best source of help.

@raiusa (Author) commented Oct 3, 2016

Thank you so much for your response. Your suggestions and clarifications are helping me a lot. Following your suggestion, I first ran the default ELK image without Mesosphere and got it running, then extended it with custom configurations (30-output.conf and grok-pattern.conf) and got that running too on a vanilla Linux. Once the extended image worked on a vanilla box, I tried it on Mesosphere and, guess what, it deployed successfully without any issue (great relief, LOL). I don't know what I was doing wrong earlier, but this time the ELK container deployed and started successfully in Mesos.

After pushing some logs, I tried to access Kibana (http://localhost:5601) but got a "not found" error.
So I tried the same on a vanilla box, and here is what I found: when I start the container with docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -p 5000:5000 -it elk, I get the Kibana page, but when I start it with docker run elk, I get the "not found" error.
PS: the start.sh file does exist at /usr/local/bin/start.sh.
Any thoughts?

@spujadas (Owner) commented Oct 3, 2016

Great to hear that part of the set-up is now working with Mesos!

As far as accessing Kibana is concerned: is your Mesos cluster hosted locally? If not, you'll need to access it using the proper IP or DNS name rather than localhost.
The reason you can't access Kibana when running docker run elk is that you need to expose the ports (5601 for Kibana, 9200 for ES, etc. – see the image's documentation for more info) for them to be visible from your host.
That may also be the issue with your Mesos set-up: the ports that are accessible from outside Mesos must be exposed (so typically, at the very least: 5601 for Kibana, and 5044 for Logstash's Beats interface).

@raiusa (Author) commented Oct 3, 2016

I always appreciate your help. Based on my understanding, if the EXPOSE command is present in the Dockerfile, the container should pick it up from there. Do we need to set some flag in the Dockerfile to automatically map the ports in the container? I can't find any option in Mesos to map them. I will try to reach out to the Mesos team to figure out this option. Can you please point me to the document where this is mentioned?

@spujadas (Owner) commented Oct 3, 2016

Oops, my mistake, I meant 'publish(ed)', not 'expose(d)': you're right that the port is already EXPOSEd by the Dockerfile, which means it's visible from other containers managed by the same instance of Docker, but it needs to be published using Docker's -p option (or equivalent) in order to be visible from the "outside".
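
For illustration:

```sh
# Ports EXPOSEd in the Dockerfile are only reachable from other containers:
docker run -it sebp/elk

# Publishing them with -p maps them onto the host, making them reachable from "outside":
docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -it sebp/elk
```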

In Mesos, my best guess is that the answer is somewhere on this page, but as always the Mesos team will be able to help you out, as I've never used Mesos myself.

@raiusa (Author) commented Oct 5, 2016

Thank you. I'm still struggling and working with the Mesosphere team to find the equivalent configuration to Docker's -p. But in the meantime I'm trying to test everything on vanilla Linux (so, manually running the Docker image on a few machines in the cluster). After manually pushing some data into Elasticsearch, I tried to visualise it in Kibana but didn't get through. Going through the Kibana logs, I found the following entry:

{"type":"log","@timestamp":"2016-10-05T17:22:52Z","tags":["warning","elasticsearch"],"pid":230,"message":"Unable to revive connection: http://localhost:9200/"}

Do you think I should make some change to the Kibana config (kibana.yml)? As of now it's using the default config.

@spujadas (Owner) commented Oct 6, 2016

Very strange indeed… the default image on a vanilla Linux should work out of the box.

From what I've read, the Unable to revive connection error could be due to a variety of reasons, including networking errors. In particular, if you're running a cluster, make sure that Kibana is pointing to the DNS name/IP address of Elasticsearch if it isn't running locally, and make sure that you can curl/browse to http://localhost:9200/ (especially from the host running Kibana).

You may also want to check out https://discuss.elastic.co/c/elasticsearch (for instance this very recently opened post: https://discuss.elastic.co/t/unable-to-revive-connection-errors-when-running-es-in-docker-container/62374) for suggestions.

@raiusa (Author) commented Oct 13, 2016

With the help of the Mesosphere team, I was able to resolve the above issues, and the ELK server is finally up and running in Mesosphere. So I've moved one step further, but I'm not done yet, as I don't see the logs in Kibana. It seems like Filebeat is not shipping logs to Logstash. See the Docker logs of the Filebeat container below.
It seems Filebeat doesn't communicate with Logstash due to a certificate error. Based on my understanding, the base Docker image ships all the certs (logstash-forwarder.key, logstash-forwarder.crt, logstash-beats.crt and logstash-beats.key), and if I haven't overridden those files, the container will use the base certs (the defaults in the elk image). First question: do I need to generate my own cert and key and override those? Also, I have pushed the newly created cert of the ELK server into my own Filebeat container. Do the Filebeat and ELK server certs need to match?
I hope I've managed to explain my issue; if not, please let me know.
Any thoughts?

2016/10/13 19:56:40.483005 geolite.go:24: INFO GeoIP disabled: No paths were set under output.geoip.paths
2016/10/13 19:56:40.483681 logstash.go:106: INFO Max Retries set to: 3
2016/10/13 19:56:40.495260 transport.go:125: ERR SSL client failed to connect with: x509: certificate is valid for *, not elk.marathon.mesos
2016/10/13 19:56:40.495285 outputs.go:126: INFO Activated logstash as output plugin.
2016/10/13 19:56:40.495351 publish.go:288: INFO Publisher name: 2928cc769074
2016/10/13 19:56:40.495637 async.go:78: INFO Flush Interval set to: 1s
2016/10/13 19:56:40.495659 async.go:84: INFO Max Bulk Size set to: 1024
2016/10/13 19:56:40.495721 beat.go:168: INFO Init Beat: filebeat; Version: 1.3.0
2016/10/13 19:56:40.496351 beat.go:194: INFO filebeat sucessfully setup. Start running.
2016/10/13 19:56:40.496489 registrar.go:68: INFO Registry file set to: /var/lib/filebeat/registry
2016/10/13 19:56:40.496712 prospector.go:133: INFO Set ignore_older duration to 5m0s
2016/10/13 19:56:40.496725 prospector.go:133: INFO Set close_older duration to 1h0m0s
2016/10/13 19:56:40.496732 prospector.go:133: INFO Set scan_frequency duration to 10s
2016/10/13 19:56:40.496738 prospector.go:93: INFO Input type set to: log
2016/10/13 19:56:40.496741 prospector.go:133: INFO Set backoff duration to 1s
2016/10/13 19:56:40.496748 prospector.go:133: INFO Set max_backoff duration to 10s
2016/10/13 19:56:40.496752 prospector.go:113: INFO force_close_file is disabled
2016/10/13 19:56:40.496803 prospector.go:143: INFO Starting prospector of type: log
2016/10/13 19:56:40.496887 spooler.go:77: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2016/10/13 19:56:40.496927 crawler.go:78: INFO All prospectors initialised with 0 states to persist
2016/10/13 19:56:40.496941 registrar.go:87: INFO Starting Registrar
2016/10/13 19:56:40.496958 publish.go:88: INFO Start sending events to output

@spujadas (Owner)

Glad to hear that things are starting to work!

As far as certificates are concerned, I suppose you've already read this part of the documentation.
In your situation, your ELK container has been assigned the hostname elk.marathon.mesos, so you need to:

  • Generate a private key (the logstash-beats.key file) and a certificate (logstash-beats.crt) for either elk.marathon.mesos or *.marathon.mesos (as you prefer – the * option may be better if you want to have an ELK cluster later down the line).
  • Overwrite the corresponding files in the ELK image.
  • Configure Filebeat to trust this updated logstash-beats.crt file (i.e. use the certificate_authorities: section to point to this file).

And that should be it!
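
As a rough sketch of those three steps (the OpenSSL invocation is an assumption, not the image's documented procedure, and the file locations should be double-checked against the image):

```sh
# 1. Generate a key/certificate pair whose CN matches the Mesos-DNS name:
openssl req -x509 -nodes -newkey rsa:2048 -days 3650 \
  -subj "/CN=elk.marathon.mesos" \
  -keyout logstash-beats.key -out logstash-beats.crt

# 2. Overwrite the files in an image extending sebp/elk (Dockerfile):
#      ADD ./logstash-beats.key /etc/pki/tls/private/logstash-beats.key
#      ADD ./logstash-beats.crt /etc/pki/tls/certs/logstash-beats.crt

# 3. Point Filebeat at the certificate (filebeat.yml, Filebeat 1.x syntax):
#      output:
#        logstash:
#          hosts: ["elk.marathon.mesos:5044"]
#          tls:
#            certificate_authorities: ["/path/to/logstash-beats.crt"]
```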

@raiusa (Author) commented Oct 14, 2016

Thanks for the reply. One quick clarification: do I need to overwrite the logstash-forwarder.key and logstash-forwarder.crt files, or the logstash-beats.key and logstash-beats.crt ones? I am using Filebeat as the shipper.

@spujadas (Owner)

You only need to overwrite the logstash-beats.* files; you can safely ignore the logstash-forwarder.* ones.

@raiusa (Author) commented Oct 25, 2016

Hi,
Back again after a gap of a few days. Very quick question: does the Logstash in the sebp/elk image support the Kafka input plugin?
Thanks,
Rai

@spujadas (Owner)

Sure thing, this image is just a way to package the ELK stack, so anything that can be done with the ELK services can be done with the image.
Specifically, see https://elk-docker.readthedocs.io/#tweaking-image on how to install plugins, and #54 for an example with Kafka.

@raiusa (Author) commented Oct 26, 2016

Thank you as always. After tweaking the image, I tried to run my elk-kafka image, but I got the following error in logstash.log:

"Unknown setting 'bootstrap_servers' for kafka", :level=>:error

Here is my kafka-input config:

input {
  kafka {
    topic_id => "elk"
    bootstrap_servers => "xxx.xx.xxx.xxx:9241,xxx.xx.xxx.xxx:9331,xxx.xx.xxx.xxx:9582"
    type => "kafka-input"
  }
}

Tweaked Dockerfile:

FROM sebp/elk
ADD ./kafka-input.conf /etc/logstash/conf.d/kafka-input.conf
ADD ./30-output.conf /etc/logstash/conf.d/30-output.conf
ADD ./mesos-grok-pattern.pattern ${LOGSTASH_HOME}/patterns/mesos-grok-pattern
WORKDIR ${LOGSTASH_HOME}
RUN gosu logstash bin/logstash-plugin install logstash-input-kafka
RUN cd /etc/logstash/conf.d/ \
 && rm -f 01-lumberjack-input.conf 02-beats-input.conf 10-syslog.conf 11-nginx.conf

There's something I'm not doing right. Any help?

@raiusa (Author) commented Oct 26, 2016

I forgot to mention that I have installed the kafka-input plugin from here.

@spujadas (Owner)

Unfortunately I can't help you with the actual usage of the ELK stack (I'm only "responsible" for packaging the image), so I'm going to redirect you to https://discuss.elastic.co/ for guidance on how to configure ELK and the components surrounding it (including plugins).

@raiusa (Author) commented Nov 1, 2016

Hi,
Can you please guide me on how I can update your image to install ELK 5.0.0?
Thanks,
Manoj

@spujadas (Owner) commented Nov 2, 2016

From a Docker perspective, simply pull the latest or es500_l500_k500 version of the image and then start the container as usual.
From an ELK perspective, see https://www.elastic.co/guide/en/elastic-stack/current/upgrading-elastic-stack.html for guidance on how to upgrade.
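
For illustration:

```sh
docker pull sebp/elk                   # latest, built from the master branch
docker pull sebp/elk:es500_l500_k500   # the tag pinned to the ELK 5.0.0 build
```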

@raiusa (Author) commented Nov 2, 2016

If I am correct, the version is hard-coded in the Dockerfile – in the case of Logstash, for instance, it is 2.4. I am trying to understand how it will pull the latest.

@spujadas (Owner) commented Nov 2, 2016

The master branch and the es500_l500_k500 tag use version 5.0.0 of the ELK stack (see Dockerfile of master branch and Dockerfile with es500_l500_k500 tag).
The latest version of the sebp/elk image is built from the master branch, so pulling the latest image will give you version 5.0.0 of the ELK stack.

@raiusa (Author) commented Nov 2, 2016

Thank you as always. Just one quick question: if I create my own image based on yours (FROM sebp/elk), does it pull the latest from the master branch?

@spujadas (Owner) commented Nov 2, 2016

Yes, by default it extends latest, but you can specify a tag to override this (e.g. FROM sebp/elk:es500_l500_k500).

@raiusa (Author) commented Nov 3, 2016

With your help, my complete ELK stack with the pipeline filebeat -> logstash -> elasticsearch -> kibana is up and running in Mesosphere – thank you so much for that. Now I am trying a different pipeline, filebeat -> Kafka -> logstash -> elasticsearch -> kibana, and that's where I am using ELK version 5.0.0.
After adding the es500_l500_k500 tag, it pulls version 5.0.0 of the ELK stack. So that part is resolved, and ELK is up and running with the Kafka input plugin. But now I am getting some errors that I never had with the previous version of the ELK stack.
I don't know whether they're due to an image issue, but I thought I'd ask you, as you have always guided me in the right direction.
The following error log is from logstash.log:

[2016-11-03T14:39:26,640][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["elk.marathon.mesos:9200"]}
[2016-11-03T14:39:26,641][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>12, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1500}
[2016-11-03T14:39:26,646][INFO ][logstash.pipeline ] Pipeline main started
[2016-11-03T14:39:26,667][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2016-11-03T14:39:31,744][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>["http://elk.marathon.mesos:9200"], :added=>[]}}
[2016-11-03T14:39:36,746][WARN ][logstash.outputs.elasticsearch] UNEXPECTED POOL ERROR {:e=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError: No Available connections>}
[2016-11-03T14:39:36,747][WARN ][logstash.outputs.elasticsearch] Elasticsearch output attempted to sniff for new connections but cannot. No living connections are detected. Pool contains the following current URLs {:url_info=>{}}
[2016-11-03T14:39:41,748][WARN ][logstash.outputs.elasticsearch] UNEXPECTED POOL ERROR {:e=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError: No Available connections>}

And there is also some error with logrotate:

Caused by: java.lang.IllegalStateException: ManagerFactory [org.apache.logging.log4j.core.appender.rolling.RollingFileManager$RollingFileManagerFactory@2c2252c8] unable to create manager for [/var/log/logstash/logstash.log/logstash-plain.log] with data [org.apache.logging.log4j.core.appender.rolling.RollingFileManager$FactoryData@c69fab[pattern=/var/log/logstash/logstash.log/logstash-plain-%d{yyyy-MM-dd}.log, append=true, bufferedIO=true, bufferSize=8192, policy=CompositeTriggeringPolicy(policies=[TimeBasedTriggeringPolicy(nextRolloverMillis=0, interval=1, modulate=true)]), strategy=DefaultRolloverStrategy(min=1, max=7), advertiseURI=null, layout=org.apache.logging.log4j.core.layout.JsonLayout@5e4eb85b]]
at org.apache.logging.log4j.core.appender.AbstractManager.getManager(AbstractManager.java:75)
at org.apache.logging.log4j.core.appender.OutputStreamManager.getManager(OutputStreamManager.java:81)
at org.apache.logging.log4j.core.appender.rolling.RollingFileManager.getFileManager(RollingFileManager.java:103)
at org.apache.logging.log4j.core.appender.RollingFileAppender.createAppender(RollingFileAppender.java:191)
... 101 more

@spujadas (Owner) commented Nov 3, 2016

Great to hear that things are or have been working!

Regarding the error with Logstash, my first guess is that it's not related to the image – it might be a breaking change in v5.0.0 – so best head over to https://discuss.elastic.co/ for guidance.

As far as log rotation is concerned, I'll have to give it a closer look (FYI there's an open issue – namely #63 – regarding logrotate and Logstash, which is possibly not specific to the image; it might be related to what you're seeing).
Could you confirm that this is due to logrotate? From the logs it seems that it's log4j that is complaining, so I just want to make sure that you're seeing this after the logs have been rotated.
Could you also cross-check your Logstash configuration for logging? The /var/log/logstash/logstash.log/logstash-plain.log path in the logs seems suspicious.

@raiusa (Author) commented Nov 3, 2016

I thought so. I have already posted my issue on the Elastic forum.
Here is the complete log from logstash.log, other than the connection issue.

2016-11-03 14:39:26,121 main ERROR Unable to create file /var/log/logstash/logstash.log/logstash-plain.log java.io.IOException: Not a directory
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:1012)
at org.apache.logging.log4j.core.appender.rolling.RollingFileManager$RollingFileManagerFactory.createManager(RollingFileManager.java:421)
at org.apache.logging.log4j.core.appender.rolling.RollingFileManager$RollingFileManagerFactory.createManager(RollingFileManager.java:403)
at org.apache.logging.log4j.core.appender.AbstractManager.getManager(AbstractManager.java:73)
at org.apache.logging.log4j.core.appender.OutputStreamManager.getManager(OutputStreamManager.java:81)
at org.apache.logging.log4j.core.appender.rolling.RollingFileManager.getFileManager(RollingFileManager.java:103)
at org.apache.logging.log4j.core.appender.RollingFileAppender.createAppender(RollingFileAppender.java:191)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:182)
at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:203)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:326)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:170)
at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
at org.jruby.ast.RescueNode.executeBody(RescueNode.java:221)
at org.jruby.ast.RescueNode.interpret(RescueNode.java:116)
at org.jruby.ast.BeginNode.interpret(BeginNode.java:83)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:225)
at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:219)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:346)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:204)
at opt.logstash.lib.bootstrap.environment.file(/opt/logstash/lib/bootstrap/environment.rb:68)
at opt.logstash.lib.bootstrap.environment.load(/opt/logstash/lib/bootstrap/environment.rb)
at org.jruby.Ruby.runScript(Ruby.java:857)
at org.jruby.Ruby.runScript(Ruby.java:850)
at org.jruby.Ruby.runNormally(Ruby.java:729)
at org.jruby.Ruby.runFromMain(Ruby.java:578)
at org.jruby.Main.doRunFromMain(Main.java:393)
at org.jruby.Main.internalRun(Main.java:288)
at org.jruby.Main.run(Main.java:217)
at org.jruby.Main.main(Main.java:197)

2016-11-03 14:39:26,124 main ERROR Unable to invoke factory method in class class org.apache.logging.log4j.core.appender.RollingFileAppender for element RollingFile. java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.build(PluginBuilder.java:132)
at org.apache.logging.log4j.core.config.AbstractConfiguration.createPluginObject(AbstractConfiguration.java:918)
at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:858)
at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:850)
at org.apache.logging.log4j.core.config.AbstractConfiguration.doConfigure(AbstractConfiguration.java:479)
at org.apache.logging.log4j.core.config.AbstractConfiguration.initialize(AbstractConfiguration.java:219)
at org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:231)
at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:187)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:306)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:225)
at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:219)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:346)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:204)
at opt.logstash.lib.bootstrap.environment.file(/opt/logstash/lib/bootstrap/environment.rb:68)
at opt.logstash.lib.bootstrap.environment.load(/opt/logstash/lib/bootstrap/environment.rb)
at org.jruby.Ruby.runScript(Ruby.java:857)
at org.jruby.Ruby.runScript(Ruby.java:850)
at org.jruby.Ruby.runNormally(Ruby.java:729)
at org.jruby.Ruby.runFromMain(Ruby.java:578)
at org.jruby.Main.doRunFromMain(Main.java:393)
at org.jruby.Main.internalRun(Main.java:288)
at org.jruby.Main.run(Main.java:217)
at org.jruby.Main.main(Main.java:197)
Caused by: java.lang.IllegalStateException: ManagerFactory [org.apache.logging.log4j.core.appender.rolling.RollingFileManager$RollingFileManagerFactory@2c2252c8] unable to create manager for [/var/log/logstash/logstash.log/logstash-plain.log] with data [org.apache.logging.log4j.core.appender.rolling.RollingFileManager$FactoryData@c69fab[pattern=/var/log/logstash/logstash.log/logstash-plain-%d{yyyy-MM-dd}.log, append=true, bufferedIO=true, bufferSize=8192, policy=CompositeTriggeringPolicy(policies=[TimeBasedTriggeringPolicy(nextRolloverMillis=0, interval=1, modulate=true)]), strategy=DefaultRolloverStrategy(min=1, max=7), advertiseURI=null, layout=org.apache.logging.log4j.core.layout.JsonLayout@5e4eb85b]]
at org.apache.logging.log4j.core.appender.AbstractManager.getManager(AbstractManager.java:75)
at org.apache.logging.log4j.core.appender.OutputStreamManager.getManager(OutputStreamManager.java:81)
at org.apache.logging.log4j.core.appender.rolling.RollingFileManager.getFileManager(RollingFileManager.java:103)
at org.apache.logging.log4j.core.appender.RollingFileAppender.createAppender(RollingFileAppender.java:191)
... 101 more

@spujadas (Owner) commented Nov 3, 2016

OK, now I see it, it's an issue with the image: Logstash's -l switch changed in version 5. Its argument used to be a file; it's now a directory, hence ERROR Unable to create file /var/log/logstash/logstash.log/logstash-plain.log java.io.IOException: Not a directory.
Will update the image accordingly as soon as I can.

(Tracking this specific issue in #81)
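
For illustration, the gist of the change (paths taken from the logs above; the exact invocation in the image's start.sh may differ):

```sh
# Logstash 2.x: -l took a log *file*:
/opt/logstash/bin/logstash -l /var/log/logstash/logstash.log -f /etc/logstash/conf.d
# Logstash 5.x: -l takes a log *directory*, in which Logstash creates logstash-plain.log:
/opt/logstash/bin/logstash -l /var/log/logstash -f /etc/logstash/conf.d
```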

@raiusa (Author) commented Nov 3, 2016

Can you please inform me once you've made that change? I'm assuming this error has nothing to do with the connection issue?

@spujadas (Owner) commented Nov 3, 2016

Please use GitHub's subscribe button on #81 to be kept informed when the change is made.

@spujadas (Owner) commented Nov 3, 2016

And your assumption is right, this is unrelated to the connection issue.

@spujadas (Owner) commented Nov 4, 2016

> [2016-11-03T14:39:41,748][WARN ][logstash.outputs.elasticsearch] UNEXPECTED POOL ERROR {:e=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError: No Available connections>}

For some reason, it appears that enabling the sniffing setting of Logstash's Elasticsearch output plugin causes this in version 5 (it might need some tweaking of Elasticsearch's configuration to work; it was working in earlier versions).
Anyway, I've disabled sniffing: the warning messages no longer appear, and the connection between Logstash and Elasticsearch seems to be more reliable.
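
For illustration, a minimal sketch of an Elasticsearch output with sniffing disabled (the host value is a placeholder):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => false
  }
}
```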

I've done a couple of other updates and can confirm that everything is working end to end, from Filebeat to Kibana.

The updated image is currently being built (will be a few minutes): could you give it a shot once it's ready?

@raiusa (Author) commented Nov 4, 2016

You are the best, and thank you so much for helping me out. I would love to try it. Please let me know once you've checked it in – I think I will get an email, as I have already subscribed.

@spujadas (Owner) commented Nov 4, 2016

Thanks for the kind words!
The image has been built (https://hub.docker.com/r/sebp/elk/builds/), so you can pull it and give it a go.

@raiusa (Author) commented Nov 5, 2016

Somehow it's not pulling ELK 5.0.0, because I'm now getting Kafka setting errors that didn't occur when I was using the es500_l500_k500 tag:

{:timestamp=>"2016-11-05T02:13:41.490000+0000", :message=>"Unknown setting 'topics' for kafka", :level=>:error}
{:timestamp=>"2016-11-05T02:13:41.493000+0000", :message=>"Unknown setting 'bootstrap_servers' for kafka", :level=>:error}
{:timestamp=>"2016-11-05T02:13:41.495000+0000", :message=>"fetched an invalid config", :config=>"output {\n elasticsearch {\n hosts => ["elk.marathon.mesos:9200"]\n sniffing => false\n manage_template => false\n index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"\n document_type => "%{[@metadata][type]}"\n }\n}\n\ninput {\n kafka {\n topics => "elk"\n bootstrap_servers => "###.###.###.###:9582"\n type => "kafka-input"\n }\n}\n", :reason=>"Something is wrong with your configuration.", :level=>:error}

Here is my Dockerfile:

FROM sebp/elk:latest
ADD ./kafka-input.conf /etc/logstash/conf.d/kafka-input.conf
ADD ./30-output.conf /etc/logstash/conf.d/30-output.conf
ADD ./mesos-grok-pattern.pattern ${LOGSTASH_HOME}/patterns/mesos-grok-pattern
WORKDIR ${LOGSTASH_HOME}
RUN gosu logstash bin/logstash-plugin install logstash-input-kafka
RUN cd /etc/logstash/conf.d/ \
 && rm -f 01-lumberjack-input.conf 02-beats-input.conf 10-syslog.conf 11-nginx.conf

@spujadas (Owner) commented Nov 5, 2016

Can't help you there, I'm afraid. I've just tried using the image (with both the latest and es500_l500_k500 tags) on a fresh VM, and everything's working fine with the complete ELK 5 stack.

@raiusa (Author) commented Nov 5, 2016

So the es500_l500_k500 tag has the latest code too?

@spujadas (Owner) commented Nov 5, 2016

latest is automatically built from the head of the master branch, and es500_l500_k500 was tagged to match the current working version of ELK version 5. So at this point, both tags point to ELK version 5.

@spujadas (Owner)

Closing this issue (the original problem was solved).

@raiusa (Author) commented Jan 6, 2017 via email

@spujadas (Owner) commented Jan 7, 2017

Nice to hear that you got everything working. Any specific tips you'd like to share that may be useful to others attempting to do the same?

As far as load balancing is concerned, my (very limited) understanding of Kafka is that it natively takes care of load balancing requests from producers and consumers to brokers, so using an external load balancer would be at best redundant and at worst inefficient. But I could be terribly wrong, so I'd strongly recommend approaching someone who is actually knowledgeable about Kafka for guidance.

@raiusa (Author) commented Feb 3, 2017 via email

@spujadas (Owner) commented Feb 3, 2017

No, I haven't got an image for your use case: I only maintain this baseline image, and it's up to users to extend it as needed.
In your case, to extend the image to add a Logstash plugin, see http://elk-docker.readthedocs.io/#installing-logstash-plugins for guidance.
