[SUGGESTION] Adding a BLACKLIST pipeline #21
Please make that PR!!!! I had in mind adding this kind of integration later on (very far away), or waiting for Elastic to add it. Anyway, I reviewed it a little bit and found several challenges:
Let us know what you think, what challenges you see, and what strategy you plan to use. Again, many thanks for your contributions and ideas.
Hello,
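The configuration snippet that originally appeared here was stripped from the page. Given the dictionary-lookup approach described below, it was presumably a Logstash translate filter along these lines; the field names, the fallback value, and the dictionary path are assumptions:

```
filter {
  # Look up the source IP in the blacklist dictionary; if it matches,
  # write the dictionary value into the target field (names assumed)
  translate {
    source          => "[source][ip]"
    target          => "[threat][indicator][description]"
    dictionary_path => "/etc/logstash/BlackListIP.yml"
    fallback        => "clean"
  }
}
```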
Where BlackListIP.yml is something like:
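The dictionary itself was also stripped from the page. A Logstash translate dictionary is just key/value pairs, so a minimal sketch with made-up indicators and tags would be:

```yaml
# Hypothetical entries: key is the indicator, value is the enrichment tag
"203.0.113.45": "known_c2"
"198.51.100.7": "botnet"
"192.0.2.200": "scanner"
```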
Then we can use some field coloring and even scripted fields to automate the analysis even further and make extra correlations:
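One way to do that in Kibana is a scripted field written in Painless; a hedged sketch, assuming the enrichment step wrote its match into a `threat.indicator.description` field (that field name is an assumption):

```
// Hypothetical Kibana scripted field: emit 1 when the event carries
// a blacklist enrichment, 0 otherwise, so it can drive field coloring
if (doc.containsKey('threat.indicator.description') &&
    !doc['threat.indicator.description'].empty) {
  return 1;
}
return 0;
```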
Threat aging and backwards lookup are really great challenges due to the design of Elasticsearch, which deep down creates new documents instead of updating them.

For threat aging, it really depends on the consumer, their threat-intel needs and, most importantly, the size of their cluster, because you can't just keep all the data. Each organization should define its needs in this matter, and if we are being real, according to the Pyramid of Pain, IOCs are among the most easily changed indicators.

For the backwards lookup, I think we can use some fingerprinting with Logstash in order to avoid duplicates, and use it to update documents instead of creating new ones. The challenge here is that an IP might be good today but malicious after a while; instead of creating a new document, we should update the first one by replacing the id of every ingested threat-intel document with its fingerprint. This is something that was used in this project: https://git.deepaknadig.com/deepak/sdn-threat-intelligence/-/tree/master/
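The fingerprinting idea could be wired up with Logstash's fingerprint filter; using the fingerprint as the Elasticsearch `document_id` makes re-ingesting the same indicator an update rather than a new document. The field names and index name below are assumptions:

```
filter {
  # Hash the indicator so the same IOC always maps to the same document id
  fingerprint {
    source => ["[threat][indicator][ip]"]
    target => "[@metadata][fingerprint]"
    method => "SHA256"
  }
}
output {
  elasticsearch {
    index       => "threat-intel"
    document_id => "%{[@metadata][fingerprint]}"
  }
}
```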
and the output is like this:
The other key for this to work properly is to create the threat-intel index as an ECS-compliant index, so that we can create scripted fields to correlate IOCs with the observer fields (source.ip, destination.ip, hash value, domain name...). Maybe we can then build our queries based on the date of ingestion (not the creation of the IOC), and once a document is old enough, our ILM policy makes it warm and then deletes it (this theory definitely has some flaws, but we should try and test everything).
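The aging policy described above (warm, then delete) maps onto a standard ILM policy; a rough sketch, where the phase timings are made-up values each organization would tune:

```
PUT _ilm/policy/threat-intel-aging
{
  "policy": {
    "phases": {
      "hot":    { "actions": {} },
      "warm":   { "min_age": "30d", "actions": { "set_priority": { "priority": 50 } } },
      "delete": { "min_age": "90d", "actions": { "delete": {} } }
    }
  }
}
```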
Create a web application (Flask, Python, or anything simple) to automate the upload, download, and ingestion tasks for SIEM engineers and Threat Intelligence analysts, for example:
Basically, this application is going to replace the manual effort of creating YAML dictionaries or cron jobs that download CTI feeds. Tell me what you think of the validity of these actions.
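At its core, what the app would automate is turning a downloaded CTI feed into a translate-filter dictionary. A minimal sketch of that step; the feed format (one IP per line, `#` comments) and the tag value are assumptions:

```python
def feed_to_dictionary(feed_text: str, tag: str = "blacklisted") -> str:
    """Turn a one-IP-per-line CTI feed into a Logstash translate dictionary (YAML).

    Assumes a plain-text feed; real feeds (CSV, STIX, ...) would need
    their own parsers in front of this.
    """
    lines = []
    for raw in feed_text.splitlines():
        ip = raw.strip()
        if not ip or ip.startswith("#"):  # skip blanks and feed comments
            continue
        lines.append(f'"{ip}": "{tag}"')
    return "\n".join(lines) + "\n"


if __name__ == "__main__":
    sample = "# demo feed\n203.0.113.45\n198.51.100.7\n"
    print(feed_to_dictionary(sample), end="")
```

A cron job (or the web app) would run this on each feed and drop the result next to the Logstash pipeline, which picks dictionary changes up on its own.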
wow! that looks super interesting. The mid-term solution seems quite interesting. Just as a suggestion, I think we should work with the ingest processor instead of Logstash dictionary lookups; that way you could also visualize your threat-intel data. Actually, the next step for the project is to move all lookups to enrich processors. About the short term: as I said, a threat-intel database can easily grow to 100k IPs, so I don't know how a Logstash lookup would impact performance, especially because firewall logs can get very heavy (if you enable log-all on the implicit deny rule, for example). You can easily go above 2k EPS. It is worth trying out though; let's push the limits until it breaks.
I have some general comments/questions: Does a dictionary file get updated in near real time when changed, or does it require a deploy to take effect? What about SIEM detection rules tripping not only on the Fortinet logs but on any log that makes it into Elastic? A simple PowerShell or Python script in conjunction with some webhook functionality should be able to create these rules. I do like the idea of having an event tagged, though, because even then you could use alerting when those tags trip. Good ideas all around.
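Such a script could build payloads for Kibana's detection-engine rules API; a sketch of the payload-assembly step only (the query shape, rule naming, and score are illustrative assumptions, and actually POSTing it to Kibana is left out):

```python
def build_blacklist_rule(ioc: str) -> dict:
    """Assemble a detection-rule payload for one blacklisted IP.

    The field names follow Kibana's detection-engine rule schema; the
    query, name, and risk score here are made-up example values.
    """
    return {
        "name": f"Blacklisted IP observed: {ioc}",
        "description": "Auto-generated from the blacklist pipeline",
        "type": "query",
        "query": f'source.ip: "{ioc}" or destination.ip: "{ioc}"',
        "risk_score": 73,
        "severity": "high",
    }


if __name__ == "__main__":
    print(build_blacklist_rule("203.0.113.45")["name"])
```

Because the rule queries `source.ip`/`destination.ip` (ECS fields), it would trip on any ECS-mapped log in Elastic, not just the Fortinet ones.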
looks like these guys are one step ahead: https://www.youtube.com/watch?v=8yf9DJ_TO6o especially when taking scalability into consideration. Blacklists can grow a lot, and we all know firewalls generate tons of logs as well. @nicpenning when you put new data into a dictionary, logs get enriched with it automatically. No need to restart your Logstash service.
A small suggestion, if it is aligned with your vision of the project: enable people to add bad IPs to their events and change event.kind to alert once a bad IP is detected, in order to surface it in the SIEM app.
This is especially beneficial when you have multiple FortiXX instances or many other solutions: you can centralize your blacklist and enrich your logs even further in a nice and easy way. I can make a PR if you want.