- Install `docker` following the instructions in the official documentation: Debian (amd64), Fedora (amd64), Raspberry Pi (armhf), Ubuntu (amd64).
- Pull the Turing @ DMF docker image:

  ```
  docker pull ghcr.io/dmf-unicatt/turing-dmf:latest
  ```
- Run a new docker container:

  ```
  docker run -p 80:80 ghcr.io/dmf-unicatt/turing-dmf:latest
  ```

  Turing will be available at http://localhost. Furthermore, the terminal will display (towards the end of a long initialization message) the username and password of the administrator account, which can subsequently be changed through the web interface.

The basic configuration is useful for local testing, but should not be used in production because, for example, the database is not shared between different runs.
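Before pulling the image, it can be worth confirming that `docker` is actually on your `PATH`. The check below is generic shell, not part of the Turing @ DMF scripts:

```shell
#!/bin/sh
# Generic sanity check (not a Turing @ DMF script): verify that docker is
# installed and on PATH before trying to pull the image.
if command -v docker >/dev/null 2>&1; then
    echo "docker found: $(docker --version)"
else
    echo "docker not found: install it first (see the links above)"
fi
```

If `docker` is reported missing, go back to the installation links for your distribution before continuing.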
- Install `docker` as in the basic configuration.
- Clone the Turing @ DMF repository as follows:

  ```
  git clone --recurse-submodules https://github.com/dmf-unicatt/turing-dmf.git
  ```
- All of the following instructions are supposed to be run in the `turing-dmf/docker` directory:

  ```
  cd turing-dmf/docker
  ```

- Create a docker volume that will contain the database:

  ```
  ./create_volume.sh
  ```
- Create a `ghcr.io/dmf-unicatt/turing-dmf:latest` docker image based on the current Turing @ DMF repository:

  ```
  ./create_image.sh
  ```

- Create a docker container based on the newly created `ghcr.io/dmf-unicatt/turing-dmf:latest` docker image:

  ```
  ./create_container.sh
  ```

- The database is created upon the first run of the container with:

  ```
  ./start_container.sh
  ```

  The terminal will display (towards the end of a long initialization message) the username and password of the administrator account, which can subsequently be changed through the web interface.
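The clone step above uses `--recurse-submodules`. If the repository was cloned without that flag, the submodules can still be fetched afterwards with standard git. A minimal sketch, run against a throwaway stand-in repository rather than the real clone:

```shell
#!/bin/sh
# Fetching submodules after a plain clone: standard git, not a repo script.
# The temporary repository below is only a stand-in for turing-dmf.
set -eu
demo="$(mktemp -d)"
cd "$demo"
git init -q demo-repo
cd demo-repo
# In a real clone of turing-dmf this fetches any missing submodules;
# in this empty stand-in it is simply a no-op.
git submodule update --init --recursive
echo "submodules initialized"
```

In an actual clone you would run the `git submodule update` line from inside `turing-dmf` itself.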
- All of the following instructions are supposed to be run in the `turing-dmf/docker` directory:

  ```
  cd turing-dmf/docker
  ```

- Start the container, including the `django` server:

  ```
  ./start_container.sh
  ```

  Turing will be available at http://host-server.
- Attach a terminal to the running docker container:

  ```
  ./attach_terminal.sh
  ```

- Explore the database volume with:

  ```
  ./explore_volume.sh
  ```

- Stop the running docker container:

  ```
  ./stop_container.sh
  ```
- The above scripts internally use three hidden files, `.container_id`, `.network_id` and `.volume_id`, to store the result of running the above commands. You may protect those files from accidental deletion by running:

  ```
  sudo ./prevent_accidental_deletion.sh
  ```
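The id files follow a simple bookkeeping convention: each helper script reads the stored id and hands it to docker. A minimal sketch of that pattern, where the file name comes from the docs but the id value and the docker invocation are illustrative, run inside a throwaway directory:

```shell
#!/bin/sh
# Sketch of the .container_id bookkeeping convention; the id value and the
# docker command are illustrative, and everything happens in a temp dir.
set -eu
workdir="$(mktemp -d)"
cd "$workdir"
echo "f2a9c3d4e5" > .container_id       # pretend create_container.sh stored this
CONTAINER_ID="$(cat .container_id)"
# a script like start_container.sh would now do: docker start "$CONTAINER_ID"
echo "would run: docker start $CONTAINER_ID"
```

Because every script resolves the target from the same files, deleting or corrupting them breaks the whole workflow, which is why the repository ships `prevent_accidental_deletion.sh`.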
- If you want to create a new container, make sure to stop the old one (if it is running), and remove the file `.container_id`. Note that, to prevent data loss, this does not delete the old container itself; it only stops the above scripts from using it.
- If you want to create a new volume, remove the file `.volume_id`. Note that, to prevent data loss, this does not delete the old volume itself; it only stops the above scripts from using it.
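Rather than removing the id file outright, you can keep a copy so the old id remains recoverable. A hypothetical sketch, where the file name matches the docs but the backup suffix and the throwaway directory are illustrative:

```shell
#!/bin/sh
# Sketch: retire the stored container id without deleting the container.
# The .container_id name matches the docs; the backup suffix is illustrative.
set -eu
demo="$(mktemp -d)"
cd "$demo"
echo "oldid" > .container_id            # stand-in for the real bookkeeping file
mv .container_id .container_id.retired  # keep a copy instead of a plain rm
echo "run ./create_container.sh to register a fresh container"
```

After this, `./create_container.sh` would write a fresh `.container_id`, while the retired id stays available if you ever need to reattach to the old container.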