HITsz-TMG/FilmAgent

Resources of our SIGGRAPH Asia 2024 paper "FilmAgent: Automating Virtual Film Production via Multi-Agent Collaboration". New versions in the making!

If you like our project, please consider giving us a star ⭐ on GitHub to stay updated with the latest developments.

Project Page

🎨 Framework

Following the traditional film studio workflow, we divide the whole virtual film production process into three sequential stages: planning, scriptwriting, and cinematography, and apply the Critique-Correct-Verify and Debate-Judge collaboration strategies. After these stages, each line in the script specifies the actors' positions, their actions, their dialogue, and the chosen camera shots.
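For intuition only, here is a minimal, hypothetical sketch of the Critique-Correct-Verify loop; the agent objects, method names, and stopping rule are assumptions, not the actual implementation. The Debate-Judge strategy similarly collects competing proposals (e.g., from actor agents) and lets a judge agent pick one.

# Hypothetical sketch of a Critique-Correct-Verify round between a Director and a
# Screenwriter agent. Method names and the stopping rule are illustrative only.
def critique_correct_verify(director, screenwriter, draft, max_rounds=3):
    for _ in range(max_rounds):
        critique = director.critique(draft)            # Director reviews the current draft
        if not critique:                               # no issues found: accept the draft
            return draft
        draft = screenwriter.correct(draft, critique)  # Screenwriter revises per the critique
        if director.verify(draft, critique):           # Director confirms the issues are resolved
            return draft
    return draft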

🌟 How to use FilmAgent

  1. Install Package
conda create -n filmagent python==3.9.18
conda activate filmagent
pip install -r env.txt
  2. Create Script and Logs folders in the FilmAgent directory, replace the absolute pathname '/path/to/' with your specific path, and set the topic in main.py. Set the api_key and organization in LLMCaller.py. Run the following commands to have the agents collaboratively create the movie script:
cd /path/to/FilmAgent
conda activate filmagent
python main.py
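For reference, the api_key and organization settings mentioned in step 2 typically correspond to an OpenAI-style client configuration; this is only a sketch, and the actual variable names and client version used in LLMCaller.py may differ.

# Hypothetical sketch of the credentials referred to in step 2; the structure
# of LLMCaller.py may differ (e.g., it may use an older openai client style).
from openai import OpenAI

client = OpenAI(
    api_key="sk-...",        # your API key
    organization="org-...",  # your organization ID
)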
  3. We use ChatTTS to provide voice acting for the characters in the script. Download the ChatTTS repository into the TTS directory, then replace the absolute pathname '/path/to/' with your specific path in tts_main.py. Run the following commands to deploy the text-to-speech service:
cd /path/to/TTS
conda create -n tts python==3.9.18
conda activate tts
pip install -r tts_env.txt
python tts_main.py
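For orientation, a minimal ChatTTS inference call looks roughly like the following; the exact load/save calls vary across ChatTTS versions, and tts_main.py additionally wraps inference in a service, so treat this as a sketch rather than the repository's code.

# Minimal ChatTTS usage sketch; exact API details vary by ChatTTS version.
import ChatTTS
import torch
import torchaudio

chat = ChatTTS.Chat()
chat.load()                                    # load_models() in older ChatTTS releases
wavs = chat.infer(["Welcome to the studio."])  # one waveform (numpy array) per input text

wav = torch.from_numpy(wavs[0])
if wav.dim() == 1:                 # some ChatTTS versions return 1-D arrays
    wav = wav.unsqueeze(0)
torchaudio.save("sample.wav", wav, 24000)      # ChatTTS outputs 24 kHz audio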
  4. Modify the Script_path, actos_path, Audio_path and url in GenerateAudio.py. Run the following commands to generate the audio files:
cd /path/to/FilmAgent
conda activate filmagent
python GenerateAudio.py
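As an illustration only, the audio generation step amounts to posting each script line to the local TTS service started in step 3. The endpoint path, JSON fields, and output location below are assumptions; the real ones are defined by tts_main.py and GenerateAudio.py.

# Hypothetical client-side sketch of generating one audio file via the local
# TTS service; the URL, request fields, and output path are assumptions.
import requests

url = "http://127.0.0.1:8000/tts"  # the `url` configured in GenerateAudio.py
payload = {"text": "Welcome to the studio.", "speaker": "Alice"}

resp = requests.post(url, json=payload)
resp.raise_for_status()
with open("Audio/line_001.wav", "wb") as f:
    f.write(resp.content)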
  5. We now have script.json, actors_profile.json, and a series of .wav audio files. Next, we execute the script in Unity. The recommended Unity editor version is 2022.3.14f1c1. Download the Unity project file we provide and decompress it, then open TheBigBang\Assets\TheBigBang\Manyrooms.unity with Unity. Replace all the absolute pathnames '/path/to/' with your specific path in TheBigBang\Assets\Scirpts\StartVideo.cs and TheBigBang\Assets\Scirpts\ScriptExecute.cs. Press Ctrl+R in the Unity editor to recompile, click 'Play' to enter Game mode, then press 'E' to start executing the script (the audio files sometimes load slowly, so you may need to play 2 or 3 times before it runs normally).
  6. For the tests on the 15 topics in our experimental section, we provide three .py files: test_full.py (the full FilmAgent framework with multi-agent collaboration), test_no_interation.py (a single agent handles planning, scriptwriting, and cinematography, i.e., FilmAgent without the multi-agent collaboration algorithms), and test_cot.py (a single agent generates a chain-of-thought rationale and the complete script).

🌈 Case Show

The following table records some comparisons of the scripts and camera settings before (left) and after (right) multi-agent collaboration, with excerpts from their discussion process.


Cases #1 and #2 come from the Critique-Correct-Verify method in Scriptwriting stages #2 and #3, respectively. Cases #3 and #4 come from the Debate-Judge method in Cinematography.

  • Case #1 shows that Director-Screenwriter discussion reduces hallucinations of non-existent actions (e.g., "standing suggest"), enhances plot coherence, and ensures consistency across scenes.
  • Case #2 shows that Actor-Director-Screenwriter discussion improves the alignment of dialogue with character profiles.
  • Case #3, in the Debate-Judge method in cinematography, demonstrates the correction of an inappropriate dynamic shot, which is replaced with a medium shot to better convey body language.
  • Case #4 replaces a series of identical static shots with a mix of dynamic and static shots, resulting in a more diverse camera setup.

Citation

If you find FilmAgent useful for your research and applications, please cite using this BibTeX:

@inproceedings{filmagent_xu_2024,
  author    = {Xu, Zhenran and Wang, Jifang and Wang, Longyue and Li, Zhouyi and Shi, Senbao and Hu, Baotian and Zhang, Min},
  title     = {FilmAgent: Automating Virtual Film Production Through a Multi-Agent Collaborative Framework},
  year      = {2024},
  isbn      = {9798400711404},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  url       = {https://doi.org/10.1145/3681758.3698014},
  doi       = {10.1145/3681758.3698014},
  booktitle = {SIGGRAPH Asia 2024 Technical Communications},
  articleno = {15},
  numpages  = {4},
  series    = {SA '24}
}
