The cron container does not have access to the var/tmp/imports folder, so that folder needs to be added as a volume shared between the mautic_web and cron containers.
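A minimal sketch of what that could look like, assuming the services are named mautic_web and mautic_cron, that Mautic is installed at /var/www/html, and using a named volume (mautic-imports) invented here for illustration:

```yaml
# Sketch only: the service names, the mautic-imports volume name, and the
# /var/www/html install path are assumptions.
volumes:
  mautic-imports:

services:
  mautic_web:
    volumes:
      - mautic-imports:/var/www/html/var/tmp/imports
  mautic_cron:
    volumes:
      - mautic-imports:/var/www/html/var/tmp/imports
```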
Also, I think it would be good to have all of the Mautic crons set up automatically by default with generic limits/configs, not just the three that are currently included.
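For illustration, a fuller default crontab could look something like this (the schedules and limit values are placeholders, and the /var/www/html/bin/console path is an assumption):

```crontab
# Illustrative defaults only; tune schedules and limits per deployment.
*/5 * * * *  php /var/www/html/bin/console mautic:segments:update --batch-limit=300
*/5 * * * *  php /var/www/html/bin/console mautic:campaigns:update --batch-limit=300
*/5 * * * *  php /var/www/html/bin/console mautic:campaigns:trigger
*/5 * * * *  php /var/www/html/bin/console mautic:emails:send
*/15 * * * * php /var/www/html/bin/console mautic:import --limit=500
*/15 * * * * php /var/www/html/bin/console mautic:broadcasts:send
```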
And I think the use of supervisord may backfire (just assuming): it will crash completely (correct me if I'm wrong) after 10 retries, and AFAIK there is no "retry indefinitely" setting in supervisord. So if, for example, I change the queue setting to sync://, the supervisord services may crash (and potentially be restarted by a docker-compose restart: always policy, but is that ideal?).
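For reference, the relevant supervisord knobs look roughly like this (the program name and command below are placeholders, not the actual config from this repo). As I understand it, autorestart=true does restart a program indefinitely once it has started successfully; it is only repeated startup failures, governed by startretries, that cap out at a finite count and leave the program in the FATAL state:

```ini
[program:mautic-worker]
; Placeholder command: substitute the actual queue/messenger worker.
command=php /var/www/html/bin/console messenger:consume email
; Restart indefinitely once the process has run for at least startsecs.
autorestart=true
startsecs=10
; Repeated *startup* failures are retried only this many times before the
; program enters FATAL; startretries only accepts a finite number.
startretries=10
```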
I also ran into this need for an enhancement, because the files uploaded for import are generated in the "mautic_web" container.
I tried to add a shared volume between the "mautic_web" and "mautic_cron" services in the "x-mautic-volumes" section of docker-compose.yml, but the "mautic_web" service fails.
To work around this, I added execution of the "mautic:import" job to the crontab of the "mautic_web" service:
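A crontab entry along these lines would match that workaround (the schedule and --limit value are assumptions):

```crontab
# Hypothetical entry for running mautic:import inside mautic_web;
# the schedule and --limit value are assumptions.
*/15 * * * * php /var/www/html/bin/console mautic:import --limit=500
```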