Turn your daily selfies into a good-looking timelapse.
This script automatically scales, rotates, crops, and captions all frames so your eyes are aligned in each photo, and compiles these frames into a timelapse.
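Conceptually, the alignment boils down to rotating each frame so that the line between the eyes is horizontal and scaling it so that the eyes end up a fixed distance apart. The sketch below illustrates that idea only; it is not Facemation's actual code, the eye coordinates and target distance are made up, and cropping and captioning are omitted.

```python
# Illustrative alignment sketch (not Facemation's implementation): level the eye
# line, then scale so the eye distance matches a fixed target value.
import math
from PIL import Image

def align_frame(image: Image.Image, left_eye, right_eye, target_eye_dist=200.0):
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = math.degrees(math.atan2(dy, dx))      # tilt of the eye line, in degrees
    scale = target_eye_dist / math.hypot(dx, dy)  # factor that equalizes eye distance
    eye_center = ((left_eye[0] + right_eye[0]) / 2, (left_eye[1] + right_eye[1]) / 2)
    leveled = image.rotate(angle, center=eye_center, resample=Image.BICUBIC)
    new_size = (round(image.width * scale), round(image.height * scale))
    return leveled.resize(new_size, resample=Image.BICUBIC)
```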
If you need help with installing or using Facemation, please feel free to start a discussion or contact me directly.
On Windows:
- Download FFmpeg. (If you have 7-Zip installed, download the `.7z` archive instead.)
- Unzip the downloaded FFmpeg archive into a new directory.
- Download the latest version of Facemation for Windows.
- Unzip the downloaded Facemation archive into another new directory.
- Enter the directory where you unzipped FFmpeg, enter the `bin` directory, and copy the file `ffmpeg.exe` to the directory where you unzipped Facemation.
  You should now have `facemation.exe` and `ffmpeg.exe` in the same directory.
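If you want to double-check that layout, a tiny Python snippet like the one below (purely illustrative, not part of Facemation; the folder path is a placeholder) lists whether both files are present:

```python
# Illustrative check: confirm facemation.exe and ffmpeg.exe are in the same directory.
from pathlib import Path

folder = Path(r"C:\Facemation")  # placeholder: the directory where you unzipped Facemation
for exe in ("facemation.exe", "ffmpeg.exe"):
    print(exe, "found" if (folder / exe).is_file() else "MISSING")
```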
On Linux:
- Install FFmpeg. On Ubuntu/Debian, you can simply run `apt install ffmpeg`.
- Download the latest version of Facemation for Linux.
- Unzip the downloaded archive into a new directory.
Unfortunately, I don't have macOS, which means that I cannot create an executable for macOS systems. Your best bet is probably to run the Python scripts directly by following the development instructions below, but even then I cannot guarantee it will work. If you have suggestions for how I can solve this, please let me know by opening an issue, starting a discussion, or contacting me directly.
To run Facemation:
- Enter the directory where you installed Facemation.
- Put the images you want Facemation to process in the `input` directory.
- Rename files if necessary so that they are in the right order. Images are processed in natural sort order; see the sketch after this list for what that means in practice.
- Execute `facemation` by double-clicking it.
- Check the created video in `output/facemation.mp4`.
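As an illustration of what natural sort order means in practice (a sketch, not Facemation's own implementation): names are compared with embedded numbers treated as numbers, so `img2.jpg` comes before `img10.jpg`.

```python
# Illustrative natural sort: split names into text and digit chunks, and compare
# the digit chunks numerically instead of character by character.
import re

def natural_key(name: str):
    return [int(chunk) if chunk.isdigit() else chunk.lower()
            for chunk in re.split(r"(\d+)", name)]

names = ["img10.jpg", "img2.jpg", "img1.jpg"]
print(sorted(names))                   # plain sort:   ['img1.jpg', 'img10.jpg', 'img2.jpg']
print(sorted(names, key=natural_key))  # natural sort: ['img1.jpg', 'img2.jpg', 'img10.jpg']
```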
All intermediate results are heavily cached, so subsequent runs are much faster.
You can change how Facemation behaves by editing the `config.py` file.
Below are some examples of how you can configure Facemation; check `config_default.py` for a list of all options.
If you do not have FFmpeg, you can disable it. Facemation will still work, but will skip the final step of creating a video.
```python
config = {
    "ffmpeg": {
        "enabled": False,
    }
}
```
To add a caption based on each photo's filename, use a configuration like the following. This code assumes that each filename is something like `IMG_20230104_174807.jpg`.
```python
from datetime import datetime

config = {
    "caption": {
        "enabled": True,
        "generator": (lambda filename: str(datetime.strptime(filename, "IMG_%Y%m%d_%H%M%S.jpg").date())),
    },
}
```
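With the example filename above, this produces the caption `2023-01-04`.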
To caption each photo with the number of days since some important date instead, use a configuration like the following. Again, this code assumes that each filename is something like `IMG_20230104_174807.jpg`.
```python
from datetime import datetime

important_date = datetime(year=2023, month=1, day=1).date()

config = {
    "caption": {
        "enabled": True,
        "generator": (lambda filename:
                      str((datetime.strptime(filename, "IMG_%Y%m%d_%H%M%S.jpg").date() - important_date).days)),
    },
}
```
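With the example filename above, this produces the caption `3`, since 2023-01-04 is three days after 2023-01-01.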
To add music to the video, put your music file in the directory that contains `config.py`. Then, update your configuration as below, replacing `music.mp3` with the name of your music file.
```python
config = {
    "ffmpeg": {
        "custom_inputs": ["-i", "music.mp3"],
        "custom_output_options": ["-map", "0:v", "-map", "1:a", "-shortest"],
    },
}
```
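Here, `-map 0:v` takes the video from the generated frames (the first input), `-map 1:a` takes the audio from your music file (the second input), and `-shortest` ends the video when the shorter of the two streams runs out. As a further illustration (my own variation, not something Facemation requires), FFmpeg's input-side `-ss` option can skip the start of the music file:

```python
config = {
    "ffmpeg": {
        # Skip the first 30 seconds of the music file (FFmpeg's input-side -ss option).
        "custom_inputs": ["-ss", "30", "-i", "music.mp3"],
        "custom_output_options": ["-map", "0:v", "-map", "1:a", "-shortest"],
    },
}
```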
If you are a developer and want to help with or change Facemation, these instructions are for you.
You will need the following files:
- `shape_predictor_5_face_landmarks.dat` (extract and store in `src/main/python/resources/`)
- `Roboto-Regular.ttf` (extract and store in `src/main/python/resources/`)
On Linux:
- Python 3.10
  The commands in this README invoke Python as `python`. Use `python3` instead if you have not linked `python` to `python3`.
- venv
  You can check if you have `venv` installed by running `python -m venv`; if you see usage information, you have `venv`. To install `venv` on Debian/Ubuntu, run `apt install python3-venv`.
- CMake (required to build `dlib`)
  On Debian/Ubuntu, install with `apt install cmake`.
- C++ compiler (required to build `dlib`)
  On Debian/Ubuntu, install with `apt install g++`.
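As a quick way to check these requirements (an illustrative snippet, not part of Facemation's tooling):

```python
# Illustrative requirements check: Python version, venv module, and build tools.
import importlib.util
import shutil
import sys

print("Python:", sys.version.split()[0])  # should report 3.10.x
print("venv:", "available" if importlib.util.find_spec("venv") else "missing")
for tool in ("cmake", "g++"):
    print(f"{tool}:", shutil.which(tool) or "not found")
```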
On Windows:
- Always use PowerShell.
- Python 3.10
- CMake (required to build `dlib`)
- C++ compiler (required to build `dlib`)
  You will need either Visual Studio (an editor) or Visual Studio Tools (a library). You can find both on the Visual Studio downloads page.
- Check that you satisfy the development requirements.
- Create a venv: `python -m venv venv/`
- Activate the venv:
  - Linux: `source venv/bin/activate`
  - Windows PowerShell: `./venv/Scripts/activate`
- Install dependencies:
  - `python -m pip install --upgrade pip wheel`
  - `python -m pip install -r requirements.txt`
- (Optional) Create `config_dev.py` to override both `config_default.py` and `config.py`: `cp src/main/resources/config_empty.py config_dev.py`
  Note that `config_dev.py` is always searched for in the current working directory.
Then, to run Facemation from source:
- Activate the venv:
  - Linux: `source venv/bin/activate`
  - Windows PowerShell: `./venv/Scripts/activate`
- Run the script: `python src/main/python/facemation.py`
To build a distribution, you will additionally need:
- All development requirements listed above
- (Linux only) Requirements for `staticx`
- (Windows only) Always use PowerShell
- (Windows only) Windows SDK
  Copy all DLLs in `C:/Program Files (x86)/Windows Kits/10/Redist/[version]/ucrt/x64` to `src/python/resources/`. Note that the `[version]` in the path differs per system. I don't know what the implications of this are.
- Check that you satisfy the distribution requirements.
- Check the version number in the `version` file.
- Check that `config_empty.py` is up-to-date with `config_default.py`.
- Build the executable into `dist/` and create a `.zip` distribution:
  - Linux: `./build_linux.sh`
  - Windows PowerShell: `./build_windows.ps1`
- Run the executable: `dist/facemation`
In chronological order of contribution:
- Thanks to Luc Everse for finding a bunch of bugs in v1.0.0!
If I should add, remove, or change anything here, just open an issue or email me!