
Wazuh API - Invalid Credentials #1151

Open · mighty-services opened this issue Dec 14, 2023 · 6 comments
@mighty-services

I have followed your setup guide for a single-node server instance, just like in the docs. The only thing I added was the IP address of the Ubuntu 22.04 VM where Wazuh should reside.

After that, the installation went smoothly, and I can use the admin password from the end of the install to log into the new dashboard via web browser over HTTPS. Right after that, a warning is displayed:
[screenshot of the warning]

The Wazuh API details show that the API has invalid credentials:

INFO: Current API id [default]
INFO: Checking current API id [default]...
INFO: Current API id [default] has some problem: 3002 - Request failed with status code 403
INFO: Getting API hosts...
INFO: API hosts found: 1
INFO: Checking API host id [default]...
INFO: Could not connect to API id [default]: 3099 - ERROR3099 - Limit of login attempts reached. The current IP has been blocked due to a high number of login attempts
INFO: Removed [navigate] cookie
ERROR: No API available to connect

I didn't change these values at any time. The curl command in the indexer part worked fine with the password that the output gave at the end of the indexer installation.
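
For reference, the indexer check from the install guide looks like this (host and password are placeholders for your own values):

```sh
# Use the admin credentials extracted from wazuh-install-files.tar.
curl -k -u admin:<ADMIN_PASSWORD> https://<WAZUH_INDEXER_IP>:9200
```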

Another error seems to be in the "" section:

INFO: Index pattern id in cookie: yes [wazuh-alerts-*]
INFO: Getting list of valid index patterns...
INFO: Valid index patterns found: 1
INFO: Found default index pattern with title [wazuh-alerts-*]: yes
INFO: Checking the app default pattern exists: id [wazuh-alerts-*]...
INFO: Default pattern with id [wazuh-alerts-*] exists: yes
ACTION: Default pattern id [wazuh-alerts-*] set as default index pattern
INFO: Checking the index pattern id [wazuh-alerts-*] exists...
INFO: Index pattern id exists [wazuh-alerts-*]: yes
INFO: Index pattern id in cookie: yes [wazuh-alerts-*]
INFO: Checking if the index pattern id [wazuh-alerts-*] exists...
INFO: Index pattern id [wazuh-alerts-*] found: yes title [wazuh-alerts-*]
INFO: Checking if exists a template compatible with the index pattern title [wazuh-alerts-*]
INFO: Template found for the selected index-pattern title [wazuh-alerts-*]: yes
INFO: Index pattern id in cookie: [wazuh-alerts-*]
INFO: Getting index pattern data [wazuh-alerts-*]...
INFO: Index pattern data found: [yes]
INFO: Refreshing index pattern fields: title [wazuh-alerts-*], id [wazuh-alerts-*]...
ACTION: Refreshed index pattern fields: title [wazuh-alerts-*], id [wazuh-alerts-*]
INFO: Getting settings...
INFO: Check Wazuh dashboard setting [timeline:max_buckets]: 200000
INFO: App setting [timeline:max_buckets]: 200000
INFO: Settings mismatch [timeline:max_buckets]: no
INFO: Getting settings...
INFO: Check Wazuh dashboard setting [metaFields]: ["_source","_index"]
INFO: App setting [metaFields]: ["_source","_index"]
INFO: Settings mismatch [metaFields]: no
INFO: Getting settings...
INFO: Check Wazuh dashboard setting [timepicker:timeDefaults]: {"from":"now-24h","to":"now"}
INFO: App setting [timepicker:timeDefaults]: "{\"from\":\"now-24h\",\"to\":\"now\"}"
INFO: Settings mismatch [timepicker:timeDefaults]: no

When I click on the `Go to Settings` button, I see the guide for checking the status of the service:
[screenshot of the service status guide]

I also see the defined credentials for wazuh-ui, which match the output I saw within the wazuh-install-files.tar file.

I saw this issue popping up already, here (#2115) and here (#2111). At least the latter is much older than the 4.7.0 release I am using right now.

Since I'm not a developer, but rather a sysadmin desperately needing this awesome tool to work, I don't know how to debug the API with the curl command as suggested there.
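
In case it helps others, a minimal sketch of such an API check with curl, assuming the API listens on the default port 55000 and using the wazuh-wui credentials from wazuh-install-files.tar:

```sh
# Placeholders: substitute your manager's address and the wazuh-wui password.
# HTTP 200 with a token means the credentials work; 403 means they are rejected.
curl -k -u wazuh-wui:<PASSWORD> -X POST \
  "https://<WAZUH_MANAGER_IP>:55000/security/user/authenticate"
```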

@Valkierja

Same problem here.

@schneich

schneich commented Jun 26, 2024

Hi @mighty-services,

Could you share your yml? I had the exact same error. In my case, I had changed the volume paths and unknowingly turned them into bind mounts. After figuring this out, the containers were able to connect to each other. For my network settings, have a look here and here.
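
For context, the difference in a compose file looks like this (service name and paths are illustrative, not copied from the official file):

```yaml
services:
  wazuh.manager:
    volumes:
      # Named volume: Docker creates and owns the directory, so permissions
      # inside it are set up by the container on first start.
      - wazuh_etc:/var/ossec/etc
      # Bind mount (what I had accidentally switched to): Docker uses the
      # host path as-is, so ownership and permissions are up to you.
      # - ./wazuh_etc:/var/ossec/etc

volumes:
  wazuh_etc:
```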

Chris

@Valkierja

> Hi @mighty-services, could you share your yml? I have had the exact same error. […]

I reinstalled my Ubuntu VM, followed the docs, and did it again; now I'm all good. I didn't save my yml file from the earlier error case.

@EricSeastrand

> Hi @mighty-services, could you share your yml? I have had the exact same error. […]

Wait, bind mounts don’t work? Any idea which paths are affected?
I'm facing a similar issue with the Docker setup guide. I too changed them to bind mounts out of habit.

@schneich

Hi @EricSeastrand,

Well, I tried to solve it by manually setting the permissions on my bind mounts, but the folders get created when a container starts for the first time, and then the folder permissions are wrong and it does not work...

The Docker community recommended this: https://dev.to/rimelek/everything-about-docker-volumes-1ib0#custom-volume-path-overview
You basically use Docker volumes but bind them to a custom path. It feels the same as a simple bind mount, but it's technically different, as the folder permissions are handled by the container, just as with volumes.
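
A minimal sketch of such a custom volume path in compose, assuming a hypothetical host directory /srv/wazuh/etc that already exists:

```yaml
volumes:
  wazuh_etc:
    driver: local
    driver_opts:
      type: none   # no filesystem type; just bind an existing path
      o: bind
      device: /srv/wazuh/etc
```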

Have a look at my yaml and how I defined the custom volume paths.

Good luck,
Chris

@EricSeastrand

EricSeastrand commented Jul 1, 2024

Thank you for this clue! I was able to get up and running by using volumes as recommended. Even better: I've found a way to still use bind mounts! At least, I think; it's working so far.

Disclaimer: This is almost certainly an unsupported configuration, but if you have a hard requirement for bind mounts, it may help.

The process goes like this:

  1. Use the "stock" docker-compose.yml from the official Wazuh repo (which uses volumes, not bind mounts).
  2. Start the stack once with the Docker volumes (to populate the volumes with files), then stop the stack.
  3. Using a separate container, mount those volumes plus a bind-mounted directory that will be the long-term home of the Wazuh files.
  4. rsync -av all the files from those volumes to the bind mount, taking care to preserve all permissions and ownership (the -a flag); see the sketch after this list.
  5. Edit the docker-compose.yml and replace all the Docker volumes with regular bind mounts into your newly rsync'd directory.
  6. Start the stack again and test thoroughly.
  7. (maybe required?) Set node.max_local_storage_nodes=3 in wazuh.indexer.yml. *See below.
  8. (optional) Do review @schneich's docker-compose.yml, as it does many things "better" than the official one imo, e.g. it sets a timezone, gives the stack and containers more concise names, and uses restart: unless-stopped.
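
A sketch of steps 3-4, with hypothetical volume names and host paths; check docker volume ls for the real names your stack created, and repeat per volume:

```sh
# Stop the stack first so nothing writes to the volumes.
docker compose down

# Copy one volume's contents to its future bind-mount directory,
# preserving ownership and permissions (-a).
docker run --rm \
  -v single-node_wazuh_indexer_data:/src:ro \
  -v /srv/wazuh/indexer-data:/dst \
  alpine sh -c "apk add --no-cache rsync && rsync -a /src/ /dst/"
```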

For me, it was very important to use bind mounts because I regularly migrate containers/stacks between several hosts using Portainer. My storage layer is backed by GlusterFS (mounted at the OS level; not using any Docker plugins). I first tried @schneich's approach, but when I tried migrating the stack from HostA to HostB, all 3 containers "started" but had errors, and the Wazuh frontend would not load. Presumably because HostA created the volume, and now HostB is being told "create a volume in this dir, which already contains files and metadata". Using bind mounts solves all of that, because Docker isn't expecting to have full control over the volume dir. Yes, permissions become a PiTA, but I'm OK with that tradeoff.

  • I set node.max_local_storage_nodes=3 to overcome an OpenSearch error about failing to acquire a file lock (which broke the whole Wazuh app just moments after starting the stack and seeing it "work"). This setting seemed safe because I never plan to have two of these running at once (much less doing concurrent writes). Still, it feels dirty and dangerous and I don't like it. The next step will be to run a standalone OpenSearch node on each host and handle redundancy at the application level. Besides, directly connected NVMe is likely better for this workload than GlusterFS (as much as I love it).
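
For reference, a sketch of that setting; I'm assuming wazuh.indexer.yml is the file mapped over the indexer's opensearch.yml in this stack:

```yaml
# Allow up to 3 nodes to share the same data path. Only safe if you are
# certain two indexer instances never run against it at the same time.
node.max_local_storage_nodes: 3
```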

Hope this helps someone and saves them the days-long debugging expedition I just returned from :)
