
logstash issues #48

Open
eallen-vaskywire opened this issue May 13, 2024 · 13 comments

@eallen-vaskywire

No matter what I do, I keep getting "Logstash stopped with exit code 123". Everything else seems to be working and I can see OSPF data; however, the imported hostnames do not persist after a refresh or logout.

I am not sure what I am doing wrong. I have included the Logstash logs below; any help would be greatly appreciated.

Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2024-05-13T20:27:56,579][INFO ][logstash.runner ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
[2024-05-13T20:27:56,596][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.17.0", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.13+8 on 11.0.13+8 +indy +jit [linux-x86_64]"}
[2024-05-13T20:27:56,600][INFO ][logstash.runner ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -Djruby.regexp.interruptible=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/]
[2024-05-13T20:27:56,675][INFO ][logstash.settings ] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2024-05-13T20:27:56,695][INFO ][logstash.settings ] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2024-05-13T20:27:57,384][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"9739962c-3a3b-4d4f-ac09-932c69a1c65b", :path=>"/usr/share/logstash/data/uuid"}
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/sinatra-2.2.1/lib/sinatra/base.rb:931: warning: constant Tilt::Cache is deprecated
TimerTask timeouts are now ignored as these were not able to be implemented correctly
TimerTask timeouts are now ignored as these were not able to be implemented correctly
TimerTask timeouts are now ignored as these were not able to be implemented correctly
TimerTask timeouts are now ignored as these were not able to be implemented correctly
[2024-05-13T20:27:58,893][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
warning: thread "puma reactor (Ruby-0-Thread-4@puma reactor: :1)" terminated with exception (report_on_exception is true):
java.lang.NoSuchMethodError: 'void org.jruby.RubyThread.beforeBlockingCall(org.jruby.runtime.ThreadContext)'
at org.nio4r.Selector.doSelect(Selector.java:237)
at org.nio4r.Selector.select(Selector.java:197)
at org.nio4r.Selector$INVOKER$i$select.call(Selector$INVOKER$i$select.gen)
at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodZeroOrOneBlock.call(JavaMethod.java:577)
at org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:197)
at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.puma_minus_5_dot_6_dot_8_minus_java.lib.puma.reactor.RUBY$method$select_loop$0(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/puma-5.6.8-java/lib/puma/reactor.rb:75)
at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.puma_minus_5_dot_6_dot_8_minus_java.lib.puma.reactor.RUBY$method$select_loop$0$VARARGS(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/puma-5.6.8-java/lib/puma/reactor.rb:69)
at org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:80)
at org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:70)
at org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:207)
at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.puma_minus_5_dot_6_dot_8_minus_java.lib.puma.reactor.RUBY$block$run$1(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/puma-5.6.8-java/lib/puma/reactor.rb:39)
at org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:138)
at org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:58)
at org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:52)
at org.jruby.runtime.Block.call(Block.java:139)
at org.jruby.RubyProc.call(RubyProc.java:318)
at org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:105)
at java.base/java.lang.Thread.run(Thread.java:829)
[2024-05-13T20:28:00,280][FATAL][org.logstash.Logstash ]
java.lang.NoSuchMethodError: 'void org.jruby.RubyThread.beforeBlockingCall(org.jruby.runtime.ThreadContext)'
at org.nio4r.Selector.doSelect(Selector.java:237) ~[nio4r_ext.jar:?]
at org.nio4r.Selector.select(Selector.java:197) ~[nio4r_ext.jar:?]
at org.nio4r.Selector$INVOKER$i$select.call(Selector$INVOKER$i$select.gen) ~[jruby-complete-9.2.20.1.jar:?]
at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodZeroOrOneBlock.call(JavaMethod.java:577) ~[jruby-complete-9.2.20.1.jar:?]
at org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:197) ~[jruby-complete-9.2.20.1.jar:?]
at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.puma_minus_5_dot_6_dot_8_minus_java.lib.puma.reactor.RUBY$method$select_loop$0(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/puma-5.6.8-java/lib/puma/reactor.rb:75) ~[?:?]
at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.puma_minus_5_dot_6_dot_8_minus_java.lib.puma.reactor.RUBY$method$select_loop$0$VARARGS(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/puma-5.6.8-java/lib/puma/reactor.rb:69) ~[?:?]
at org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:80) ~[jruby-complete-9.2.20.1.jar:?]
at org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:70) ~[jruby-complete-9.2.20.1.jar:?]
at org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:207) ~[jruby-complete-9.2.20.1.jar:?]
at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.puma_minus_5_dot_6_dot_8_minus_java.lib.puma.reactor.RUBY$block$run$1(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/puma-5.6.8-java/lib/puma/reactor.rb:39) ~[?:?]
at org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:138) ~[jruby-complete-9.2.20.1.jar:?]
at org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:58) ~[jruby-complete-9.2.20.1.jar:?]
at org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:52) ~[jruby-complete-9.2.20.1.jar:?]
at org.jruby.runtime.Block.call(Block.java:139) ~[jruby-complete-9.2.20.1.jar:?]
at org.jruby.RubyProc.call(RubyProc.java:318) ~[jruby-complete-9.2.20.1.jar:?]
at org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:105) ~[jruby-complete-9.2.20.1.jar:?]
at java.lang.Thread.run(Thread.java:829) ~[?:?]

@Vadims06
Owner

Vadims06 commented May 13, 2024

Hi @eallen-vaskywire,
since this is a Logstash issue, I assume you are using OSPF Watcher, right? Could you please share your Logstash version? Ah, I see, the version is 7.17.0.

@eallen-vaskywire
Author

Yes, this is for OSPF Watcher; sorry if I put this in the wrong thread/project. The Logstash version is 7.17.0.

@Vadims06
Owner

I can suggest the following test:
Start a Logstash container

[ospf-watcher]# docker run -it --rm --network=topolograph_backend --env-file=./.env -v ./logstash/pipeline:/usr/share/logstash/pipeline -v ./logstash/config:/usr/share/logstash/config logstash:7.17.0 /bin/bash
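
If your Docker version rejects the relative paths in the -v bind mounts, the same command with absolute paths should behave identically (a sketch; run it from the ospf-watcher directory so $(pwd) points at the repo root):

[ospf-watcher]# docker run -it --rm --network=topolograph_backend --env-file=./.env -v $(pwd)/logstash/pipeline:/usr/share/logstash/pipeline -v $(pwd)/logstash/config:/usr/share/logstash/config logstash:7.17.0 /bin/bash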

Inside the container, run this command:

bin/logstash -e 'input { stdin { } } filter { dissect { mapping => { "message" => "%{watcher_time},%{watcher_name},%{event_name},%{event_object},%{event_status},old_cost:%{old_cost},new_cost:%{new_cost},%{event_detected_by},%{subnet_type},%{shared_subnet_remote_neighbors_ids},%{graph_time}" }} mutate { update => { "[@metadata][mongo_collection_name]" => "adj_change" }} } output { stdout { codec  => rubydebug {metadata => true}} }'

It will wait for input on the CLI, so copy and paste this log line:

2023-01-01T00:00:00Z,demo-watcher,metric,10.1.14.0/24,changed,old_cost:10,new_cost:123,10.1.1.4,stub,10.1.1.4,01Jan2023_00h00m00s_7_hosts

The output should be:

[INFO ] 2024-05-13 21:15:25.462 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
The stdin plugin is now waiting for input:
[INFO ] 2024-05-13 21:15:25.477 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
2023-01-01T00:00:00Z,demo-watcher,metric,10.1.14.0/24,changed,old_cost:10,new_cost:123,10.1.1.4,stub,10.1.1.4,01Jan2023_00h00m00s_7_hosts
{
                            "graph_time" => "01Jan2023_00h00m00s_7_hosts",
                     "event_detected_by" => "10.1.1.4",
                           "subnet_type" => "stub",
                               "message" => "2023-01-01T00:00:00Z,demo-watcher,metric,10.1.14.0/24,changed,old_cost:10,new_cost:123,10.1.1.4,stub,10.1.1.4,01Jan2023_00h00m00s_7_hosts",
                          "watcher_name" => "demo-watcher",
                          "watcher_time" => "2023-01-01T00:00:00Z",
                            "@timestamp" => 2024-05-13T21:15:50.628Z,
                              "old_cost" => "10",
                              "@version" => "1",
                                  "host" => "ba8ff3ab31f8",
                            "event_name" => "metric",
                              "new_cost" => "123",
    "shared_subnet_remote_neighbors_ids" => "10.1.1.4",
                          "event_object" => "10.1.14.0/24",
                          "event_status" => "changed"
}
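
If pasting into the interactive prompt is awkward, the same test can be run non-interactively by piping the sample line into stdin; this is a sketch that reuses the exact pipeline string from above (Logstash processes the event and shuts down once stdin is closed):

echo '2023-01-01T00:00:00Z,demo-watcher,metric,10.1.14.0/24,changed,old_cost:10,new_cost:123,10.1.1.4,stub,10.1.1.4,01Jan2023_00h00m00s_7_hosts' | bin/logstash -e 'input { stdin { } } filter { dissect { mapping => { "message" => "%{watcher_time},%{watcher_name},%{event_name},%{event_object},%{event_status},old_cost:%{old_cost},new_cost:%{new_cost},%{event_detected_by},%{subnet_type},%{shared_subnet_remote_neighbors_ids},%{graph_time}" }} mutate { update => { "[@metadata][mongo_collection_name]" => "adj_change" }} } output { stdout { codec  => rubydebug {metadata => true}} }'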

@eallen-vaskywire
Author

This is what I got.

Warning: no jvm.options file found.

Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2024-05-14 13:11:43.225 [main] runner - Starting Logstash {"logstash.version"=>"7.17.0", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.13+8 on 11.0.13+8 +jit [linux-x86_64]"}
[INFO ] 2024-05-14 13:11:43.237 [main] runner - JVM bootstrap flags: [-Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/]
[INFO ] 2024-05-14 13:11:43.257 [main] settings - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[INFO ] 2024-05-14 13:11:43.260 [main] settings - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[WARN ] 2024-05-14 13:11:43.596 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2024-05-14 13:11:43.611 [LogStash::Runner] agent - No persistent UUID file found. Generating new UUID {:uuid=>"67c3443d-5beb-46bd-99ce-9319fb2fe283", :path=>"/usr/share/logstash/data/uuid"}
[INFO ] 2024-05-14 13:11:45.131 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[INFO ] 2024-05-14 13:11:45.826 [Converge PipelineAction::Create<main>] Reflections - Reflections took 149 ms to scan 1 urls, producing 119 keys and 417 values
[WARN ] 2024-05-14 13:11:46.233 [Converge PipelineAction::Create] line - Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2024-05-14 13:11:46.242 [Converge PipelineAction::Create] stdin - Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[INFO ] 2024-05-14 13:11:46.582 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["config string"], :thread=>"#<Thread:0x331a596a@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:129 run>"}
[INFO ] 2024-05-14 13:11:47.999 [[main]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>1.41}
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.jrubystdinchannel.StdinChannelLibrary$Reader (file:/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/jruby-stdin-channel-0.2.0-java/lib/jruby_stdin_channel/jruby_stdin_channel.jar) to field java.io.FilterInputStream.in
WARNING: Please consider reporting this to the maintainers of com.jrubystdinchannel.StdinChannelLibrary$Reader
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
[INFO ] 2024-05-14 13:11:48.109 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
The stdin plugin is now waiting for input:
[INFO ] 2024-05-14 13:11:48.145 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[WARN ] 2024-05-14 13:11:48.279 [[main]>worker3] Dissector - Dissector mapping, field found in event but it was empty {"field"=>"message", "event"=>{"tags"=>["_dissectfailure"], "@timestamp"=>2024-05-14T13:11:48.173Z, "message"=>"", "host"=>"876c8c81f84d", "@version"=>"1"}}
{
    "@timestamp" => 2024-05-14T13:11:48.173Z,
          "host" => "876c8c81f84d",
      "@version" => "1",
       "message" => "",
          "tags" => [
        [0] "_dissectfailure"
    ]
}

@Vadims06
Owner

"message"=>""

It seems that you just pressed Enter. Did you paste the following log line: 2023-01-01T00:00:00Z,demo-watcher,metric,10.1.14.0/24,changed,old_cost:10,new_cost:123,10.1.1.4,stub,10.1.1.4,01Jan2023_00h00m00s_7_hosts?

@eallen-vaskywire
Author

After pasting the log line, this is what I get:

[INFO ] 2024-05-15 14:35:39.236 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
The stdin plugin is now waiting for input:
[INFO ] 2024-05-15 14:35:39.264 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
2023-01-01T00:00:00Z,demo-watcher,metric,10.1.14.0/24,changed,old_cost:10,new_cost:123,10.1.1.4,stub,10.1.1.4,01Jan2023_00h00m00s_7_hosts
{
                             "graph_time" => "01Jan2023_00h00m00s_7_hosts",
                      "event_detected_by" => "10.1.1.4",
                            "subnet_type" => "stub",
                                "message" => "2023-01-01T00:00:00Z,demo-watcher,metric,10.1.14.0/24,changed,old_cost:10,new_cost:123,10.1.1.4,stub,10.1.1.4,01Jan2023_00h00m00s_7_hosts",
                           "watcher_name" => "demo-watcher",
                           "watcher_time" => "2023-01-01T00:00:00Z",
                             "@timestamp" => 2024-05-15T14:35:58.542Z,
                               "old_cost" => "10",
                                   "host" => "d93e5ce08076",
                               "@version" => "1",
                             "event_name" => "metric",
    "shared_subnet_remote_neighbors_ids" => "10.1.1.4",
                               "new_cost" => "123",
                           "event_object" => "10.1.14.0/24",
                           "event_status" => "changed"
}

@vrelk-net

Try changing the Logstash version in the .env file to LOGSTASH_OSS_VERSION=7.17.21

I ran into the same issue, and this resolved that error. I don't think it was caused by the version change, but it was followed by errors about Logstash being unable to reach Elasticsearch. The relevant lines are still commented out in .env, and I'm still working on this to see if I can resolve it along with a few other issues.
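
For reference, a minimal sketch of applying that change (this assumes the stack is started with docker compose from the ospf-watcher directory; service names may differ in your setup):

# ospf-watcher/.env
LOGSTASH_OSS_VERSION=7.17.21

# pull the new image tag and recreate the containers so it takes effect
docker compose pull
docker compose up -d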

@Vadims06
Owner

Hi @eallen-vaskywire,
did using version 7.17.21 help you?

@eallen-vaskywire
Author

YES! Thank you so very much! I think we have finally got OSPF Watcher working; however, for some reason it stops receiving OSPF data after several hours, and the only way we have found to get it working again is to reboot the stack.

There are a few bits that we are still trying to sort through, such as:

  1. getting the right syntax and commands for ELK (if you have some examples of this, that would be amazing),
  2. Netbox integration for hostname population and resolution. We currently use phpIPAM/PowerDNS rather than Netbox; however, I did find a plugin to sync PowerDNS --> Netbox, and we are testing that out.

@Vadims06
Owner

Thank you @vrelk-net for your advice and @eallen-vaskywire for confirming; I will set 7.17.21 as the default version.
I will be more than happy to help you make OSPF Watcher stable. Feel free to ask questions here or via admin at topolograph.com. Your feedback is extremely valuable.
Regarding your questions:

  1. "Getting the right syntax and such for ELK": is it fair to say that you are talking about a set of commands that let you check that OSPF Watcher is healthy and help with troubleshooting? I have a troubleshooting section in the OSPFWatcher repo, but basically the first command is to get the logs of the failed container using docker logs watcher (could you please share the logs of the failed container?); if the container is running, check that quagga/frr has OSPF debugging enabled (see the command under troubleshooting section #1), and so on. I can probably provide more commands based on your input; a short sketch of these checks follows this list.
  2. I decided to deprecate Netbox support inside Topolograph and expose that option via the API instead. Netbox is developing quite fast and I can't keep up with the changes :) So I suggest exporting hostnames from your system and importing them via CSV.
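
A minimal sketch of the checks mentioned in point 1 (the container name watcher is taken from the comment above; the rest is plain Docker CLI):

# is the watcher container running, and if not, what was its exit code?
docker ps -a --filter name=watcher

# collect the logs of the failed container
docker logs watcher

# if the container is up, follow troubleshooting section #1 in the OSPFWatcher repo
# to confirm that quagga/frr has OSPF debugging enabled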

@vrelk-net

@Vadims06 would it be possible for me to get access to the repository with the actual source for Topolograph? I'm hoping I might be able to work on a few things, one of which is the user system. (I sent an email last week but haven't heard back; I figured I'd just add it here instead of creating an issue to request it.)

@Vadims06
Owner

@vrelk-net, could you please elaborate on the user system a little bit more? Did you send the email to admin at topolograph.com? Sorry, I haven't received it. Yes, I keep the Topolograph source code closed: whenever someone contacted me to onboard a new vendor into Topolograph and shared their LSDB for it, I kept that LSDB output as-is for my unit tests. I didn't anonymize it, so access has to remain restricted. All in all, I will be glad to get your feedback and listen to your ideas.

@Vadims06
Owner

Can we close this issue?
