Bag database sometimes stops scanning for new files #95

Closed
pjreed opened this issue Jun 18, 2019 · 14 comments

pjreed (Contributor) commented Jun 18, 2019

It seems that if the database encounters a SQL exception while parsing and inserting a new bag file, the exception can kill the thread that scans for new files, which prevents it from picking up new files until it's restarted. I haven't reliably reproduced this yet, but I have seen it multiple times.

CDitzel commented Mar 10, 2020

Can we do something about that, or is this repo dead?

pjreed (Contributor, Author) commented Mar 10, 2020

It's not dead, but I haven't had time to dedicate to it in a while, and I haven't been able to reproduce this reliably enough to diagnose it. If somebody else has time to look into it, or can determine the conditions necessary to reproduce it, that would be welcome.

ptulpen commented Mar 12, 2020

Hello, I have the same issue after moving to a new server.
A manually triggered search finds the bag files.
Is there a way to trigger the scan via cron as a workaround?

pjreed (Contributor, Author) commented Mar 24, 2020

You could probably use curl or wget to call the API via cron and trigger a scan. You could use the developer panel in your web browser to find the URL it calls when you click the button for a manual search, and then try calling that. You may also need to authenticate with curl/wget, but I haven't tried that before...
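
Something like the following would be a minimal sketch of that workaround, assuming the bag database is reachable at http://localhost:8080 and that POST /admin/forceScan is the endpoint behind the manual scan button; an authenticated instance would also need the login and CSRF handling worked out later in this thread:

    # Hypothetical crontab entry: ask the bag database to scan every 15 minutes.
    # Assumes no authentication or CSRF token is required; adjust the URL,
    # and add login handling, to match your own deployment.
    */15 * * * * curl -s -X POST "http://localhost:8080/admin/forceScan" >> /var/log/bagdb-scan.log 2>&1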

ptulpen commented Oct 20, 2020

Hello,
I tried it now with version 3, since the API is better documented ;)
For now I just use the simple integrated password login and start the container with:

podman run -d -p 8087:8080 [...] \
    -e DB_DRIVER=org.postgresql.Driver \
    -e DB_PASS=XXX \
    -e DB_URL="jdbc:postgresql://whatever/bag3_database" \
    -e DB_USER=bag_database \
    -e METADATA_TOPICS="/metadata" \
    -e VEHICLE_NAME_TOPICS="/vehicle_name" \
    -e GPS_TOPICS="/localization/gps, /gps, /imu/fix" \
    -e ADMIN_PASSWORD="XXX" \
    bag-database:3.0.0

url=http://localhost:8087
CSRF=`curl -s -X GET -c cookies.txt -L ${url} | grep csrf | sed -r 's/^.*value="([-a-z0-9]*)".*$/\1/'`
curl -s -c cookies.txt -b cookies.txt -F "username=admin" -F "password=XXX" -F "_csrf=${CSRF}" -L ${url}/ldap_login
curl -s -c cookies.txt -b cookies.txt -F "_csrf=${CSRF}" -X POST "${url}/admin/forceScan"

But in the authentication step I get


<body>
<div class="container">
    <div class="alert alert-dismissable alert-danger">
        <button type="button" class="close" data-dismiss="alert" aria-hidden="true">&times;</button>
        <span>403 returned for /ldap_login with message Forbidden</span>
    </div>
    <div>
        &copy; 2015-2020 Southwest Research Institute <span>3.0</span>
    </div>
</div>
</body>

pjreed (Contributor, Author) commented Oct 20, 2020

I think that if you're not using LDAP, the URL to sign in will be slightly different; try this instead:

curl -s -c cookies.txt -b cookies.txt -F "username=admin" -F "password=XXX" -F "_csrf=${CSRF}" -L  ${url}/signin

If that works, let me know and I'll update the documentation to indicate it.

ptulpen commented Oct 26, 2020

Hello,
unfortunately I still get:
403 returned for /signin with message Forbidden

(The long-term plan is to use LDAPS anyway, but I have issues there too; see #116.)

pjreed (Contributor, Author) commented Nov 3, 2020

All right, I think I've figured out what's going on here. First, I think I might've made a typo in my original example (I need to double-check it against an LDAP database), so it wasn't getting the right value for the CSRF token. Second, it looks like after logging in as an admin, the server issues a new CSRF token, which invalidates the original one.

So actually, the steps should be:

  1. Get a CSRF token; also, after you have one, double-check to make sure it's a valid UUID

    $ CSRF=`curl -s -X GET -c cookies.txt -L ${url} | grep csrfToken | sed -r 's/^.*csrfToken = "([-a-z0-9]*)".*$/\1/'`
    $ echo $CSRF
    30608b03-23a6-4a28-82d3-7c9fd68c16d1
    
  2. Log in and get the new CSRF token, then check to verify that it's changed:

    $ CSRF=`curl -s -c cookies.txt -b cookies.txt -F "username=admin" -F "password=letmein" -F "_csrf=${CSRF}" -L  ${url}/signin | grep csrfToken | sed -r 's/^.*csrfToken = "([-a-z0-9]*)".*$/\1/'`
    $ echo $CSRF
    d1a305d8-49b7-4787-9a2b-dfad0a3a7c04
    
  3. Now execute a POST:

    curl -s -c cookies.txt -b cookies.txt -F "_csrf=${CSRF}"   -X POST  "${url}/admin/forceScan"
    

If it works, curl won't print any output, but the bag database's log file should indicate that it's scanning. That works for me; let me know if it works for you, and I'll go through the documentation and make sure it's all correct.
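
For reference, here is a minimal sketch that strings those three steps together into a single script suitable for cron; the URL, credentials, and cookie-jar location are placeholders for your own setup, and the grep/sed token extraction is the same as above:

    #!/bin/sh
    # Sketch: force a bag-database scan non-interactively (placeholder URL and credentials).
    url=http://localhost:8087
    user=admin
    password=XXX
    jar=$(mktemp)

    # 1. Fetch the front page and extract the initial CSRF token.
    CSRF=$(curl -s -c "$jar" -L "$url" | grep csrfToken | sed -r 's/^.*csrfToken = "([-a-z0-9]*)".*$/\1/')

    # 2. Sign in; the server issues a new CSRF token, so capture it again.
    CSRF=$(curl -s -c "$jar" -b "$jar" -F "username=$user" -F "password=$password" -F "_csrf=$CSRF" \
        -L "$url/signin" | grep csrfToken | sed -r 's/^.*csrfToken = "([-a-z0-9]*)".*$/\1/')

    # 3. Trigger the scan; no output is expected on success.
    curl -s -c "$jar" -b "$jar" -F "_csrf=$CSRF" -X POST "$url/admin/forceScan"

    rm -f "$jar"

A crontab entry pointing at that script (hourly, for example) would then cover the cron workaround asked about earlier.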

ptulpen commented Nov 5, 2020

Ah, I got a little bit further:
It works with the script when I start a new container where I don't modify the target URL 👍
But I get a 403 on the second command when I use another sub-URL like in #103.
The script currently looks like this:


#url=http://localhost:8091 # No suburl, that works 
url=http://localhost:8080/bag # I get the first CSRF, but  a 403 Error and no new second CSRF 
CSRF=`curl -s   -X GET -c cookies.txt -L ${url} | grep csrfToken | sed -r 's/^.*csrfToken = "([-a-z0-9]*)".*$/\1/'`
echo first  $CSRF

CSRF=`curl -vvv  -c cookies.txt -b cookies.txt -F "username=admin" -F "password=${password}" -F "_csrf=${CSRF}" -L  ${url}/signin | grep csrfToken | sed -r 's/^.*csrfToken = "([-a-z0-9]*)".*$/\1/'`
echo second  $CSRF

curl  -c cookies.txt -b cookies.txt -F "_csrf=${CSRF}"   -X POST  "${url}/admin/forceScan"

pjreed (Contributor, Author) commented Nov 5, 2020

That's interesting; in my test environment, I actually have it running at http://localhost:8080/bag-database-3.0.0, and using that URL for making requests works fine for me. Is the path used to access the bag database inside its own container the same as the one used for your reverse proxy? If you just run curl -v -c cookies.txt -b cookies.txt -F "username=admin" -F "password=${password}" -F "_csrf=${CSRF}" -L ${url}/signin for your second step, what does it output?

ptulpen commented Nov 5, 2020

* About to connect() to localhost port 8080 (#0)
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8080 (#0)
> POST /bag/signin HTTP/1.1
> User-Agent: curl/7.29.0
> Host: localhost:8080
> Accept: */*
> Cookie: SESSION=d49b77ac-f70b-465c-98e0-4fb60e834b6b
> Content-Length: 395
> Expect: 100-continue
> Content-Type: multipart/form-data; boundary=----------------------------749e8b7b7a28
>
< HTTP/1.1 100
< HTTP/1.1 403
< X-Content-Type-Options: nosniff
< X-XSS-Protection: 1; mode=block
< Cache-Control: no-cache, no-store, max-age=0, must-revalidate
< Pragma: no-cache
< Expires: 0
< X-Frame-Options: DENY
< Content-Type: text/html;charset=UTF-8
< Content-Language: en-US
< Transfer-Encoding: chunked
< Vary: Accept-Encoding
< Date: Thu, 05 Nov 2020 17:37:34 GMT
< Connection: close
<
<!DOCTYPE html>
<html>
<head>
    <title>Error page</title>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
    <link href="/bag/resources/css/bootstrap.min.css" rel="stylesheet" media="screen"/>
    <link href="/bag/resources/css/core.css" rel="stylesheet" media="screen"/>
</head>
<body>
<div class="container">
    <div class="alert alert-dismissable alert-danger">
        <button type="button" class="close" data-dismiss="alert" aria-hidden="true">&times;</button>
        <span>403 returned for /bag/signin with message Forbidden</span>
    </div>
    <div>
        &copy; 2016 Southwest Research Institute <span>2.3</span>
    </div>
</div>
</body>
* Closing connection 0

pjreed (Contributor, Author) commented Nov 5, 2020

One thing I do notice there is that you're running an old version (2.3); I don't know for sure whether that's related, but I fixed some authentication-related bugs in 3.0 that could be preventing it from working properly. I also just made a new release (3.1.0) that adds an environment variable, BAGDB_PATH, for changing the path the container uses to serve the application; I think that will make it easier to integrate with a reverse proxy. You might try upgrading to the latest version and see if that helps.
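
For example, adapting the podman command from earlier in this thread, a sketch of how that might look (the BAGDB_PATH value, the 3.1.0 image tag, and the other values here are placeholders to check against the updated documentation and your own reverse-proxy configuration):

    podman run -d -p 8080:8080 \
        -e BAGDB_PATH=bag \
        -e DB_DRIVER=org.postgresql.Driver \
        -e DB_URL="jdbc:postgresql://whatever/bag3_database" \
        -e DB_USER=bag_database \
        -e DB_PASS=XXX \
        -e ADMIN_PASSWORD="XXX" \
        bag-database:3.1.0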

ptulpen commented Nov 6, 2020

It works with the new version 👍
Currently I have to stick with 2.3, at least until I have LDAPS working, since otherwise it allows anonymous uploads.

But at least I now have a workaround for this issue.

pjreed (Contributor, Author) commented Nov 6, 2020

That's a good point; I think it would make sense to be able to disable uploading and possibly other capabilities (such as editing mutable fields or running scripts) for unauthenticated users.

I'll make a new issue for that. I think version 3.0 actually fixed the bug that originally caused this issue anyway, and the API should now be documented well enough that it's possible to force a scan.

pjreed closed this as completed Nov 6, 2020