This is a project for making easy money.
In the era of short videos, whoever controls the traffic controls the money!
So I'm sharing this carefully crafted MoneyPrinter project with everyone.
It can: Use AI large model technology to generate various short videos in batches with one click.
It can: Batch edit short videos, making mass production of unique short videos a reality.
It can: Automatically publish videos to Douyin, Kwai, Xiaohongshu, and Video Number.
Making money has never been easier!
If you find it useful, please give it a star!
MoneyPrinterPlus: Open Source AI Short Video Generation Tool
MoneyPrinterPlus AI Video Tool Detailed Usage Instructions
MoneyPrinterPlus AI Batch Short Video Editing Tool Usage Instructions
MoneyPrinterPlus Beginner's Tutorial for Generating Thousands of Short Videos with One Click
MoneyPrinterPlus Detailed Usage Tutorial
MoneyPrinterPlus Alibaba Cloud Detailed Configuration and Usage Tutorial
MoneyPrinterPlus Tencent Cloud Detailed Configuration and Usage Tutorial
MoneyPrinterPlus Microsoft Cloud Detailed Configuration and Usage Tutorial
MoneyPrinterPlus Automatic Environment Configuration and Automatic Running
Usage Introduction: Breaking News! Free One-Click Batch Editing Tool Is Here, Making Thousands of Short Videos a Day a Reality
Usage Introduction: MoneyPrinterPlus Detailed Usage Tutorial
- The video automatic publishing feature is now live! Usage tutorial: MoneyPrinterPlus One-Click Publishing of Short Videos to Video Number, Douyin, Kwai, and Xiaohongshu Is Now Live
- 20240710 Support for local large model: Ollama
- 20240708 The automatic video publishing feature is now live. Supports Douyin, Kwai, Xiaohongshu, and Video Number!
- 20240704 Added automatic installation and startup scripts for easy usage by beginners.
- 20240628 Major update! Supports batch video editing and batch generation of large numbers of unique short videos!
- 20240620 Improved video merging effect to make the video ending more natural.
- 20240619 Speech recognition and synthesis supports Tencent Cloud. Requires enabling Tencent Cloud's speech synthesis and speech recognition functions.
- 20240615 Speech recognition and synthesis supports Alibaba Cloud. Requires enabling Alibaba Cloud's Intelligent Speech Interaction feature, and must enable speech synthesis and recording file recognition (Express Edition) functions.
- 20240614 Resource library supports Pixabay, adds a voice preview function, and fixes some bugs.
- Automatically publish videos to various video platforms; supports Douyin, Kwai, Xiaohongshu, and Video Number
- Batch video editing, batch generation of a large number of unique short videos
- Supports local material selection (supports various material formats such as mp4, jpg, png), and supports various resolutions.
- Cloud large model integration: OpenAI, Azure, Kimi, Qianfan, Baichuan, Tongyi Qwen, DeepSeek
- Local large model integration: Ollama
- Support for Azure voice features
- Support for Alibaba Cloud voice features
- Support for Tencent Cloud voice features
- Supports 100+ different voice types
- Supports voice preview function
- Supports 30+ video transition effects
- Supports video generation in different resolutions, sizes, and proportions
- Supports voice selection and speed adjustment
- Supports background music
- Supports background music volume adjustment
- Supports custom subtitles
- Covers mainstream AI large model tools on the market
- [ ] Support for a local voice subtitle recognition model
- [ ] Support for more ways to acquire video resources
- [ ] Support for more video transition effects
- [ ] Support for more subtitle effects
- [ ] Integration of Stable Diffusion for AI image generation and video synthesis
- [ ] Integration of Sora and other AI video large model tools for automatic video generation
Portrait | Landscape | Square |
---|---|---|
final-1718158522826.mp4 | final-1718160166012.mp4 | final-1718160533551.mp4 |
- Python 3.10+
- ffmpeg 6.0+
- LLM API key
- Azure voice service (https://speech.microsoft.com/portal)
- Or Alibaba Cloud Intelligent Speech Interaction (https://nls-portal.console.aliyun.com/overview)
- Or Tencent Cloud voice technology (https://console.cloud.tencent.com/asr)
Make sure to install ffmpeg and add the ffmpeg path to the environment variable.
- Make sure you have a running environment with Python 3.10+. If you are using Windows, ensure that the Python path is added to the PATH.
- Make sure you have a running environment with ffmpeg 6.0+. If you are using Windows, ensure that the ffmpeg path is added to the PATH. If ffmpeg is not installed, please install the corresponding version from https://ffmpeg.org/.
If you have the Python and ffmpeg environments set up, you can install the required packages using pip:
pip install -r requirements.txt
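Before installing the packages, it can save time to confirm the two prerequisites above are actually satisfied. The following is a small illustrative sketch (the function names `meets_minimum` and `check_environment` are mine, not part of the project) that checks the Python version and whether ffmpeg is reachable on the PATH:

```python
import shutil
import sys

MIN_PYTHON = (3, 10)  # minimum Python version required by the project

def meets_minimum(version, minimum):
    """Return True if a version tuple satisfies the minimum version."""
    return tuple(version[:len(minimum)]) >= tuple(minimum)

def check_environment():
    """Report whether Python and ffmpeg satisfy the prerequisites."""
    python_ok = meets_minimum(sys.version_info, MIN_PYTHON)
    # shutil.which returns None when ffmpeg is not on the PATH
    ffmpeg_ok = shutil.which("ffmpeg") is not None
    return {"python": python_ok, "ffmpeg": ffmpeg_ok}

if __name__ == "__main__":
    print(check_environment())
```

If either entry comes back False, revisit the PATH setup described above before running pip.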
Navigate to the project directory and double-click on the setup.bat file to run it on Windows. On Mac or Linux, execute:
bash setup.sh
Use the following command to run the program:
streamlit run gui.py
If you used the automatic installation script, you can execute the following script to run the program:
On Windows, double-click start.bat. On Mac or Linux, execute:
bash start.sh
The log file will contain information about the program's execution, including the URL to access the program through a web browser.
Upon opening the URL in a web browser, you will see the following interface:
The left sidebar contains three configurations: Basic Configuration, AI Video, and Batch Video (under development).
Currently supported resources:
- Pexels (www.pexels.com): a well-known site for free images and video footage.
- Pixabay (pixabay.com): another popular library of free images and videos.
You will need to register for an API key on the corresponding website to enable API usage.
Other resource libraries will be added in the future, such as videvo.net, videezy.com, etc.
The text-to-speech and speech recognition functions currently support:
- Azure Cognitive Services
- Alibaba Cloud's Intelligent Speech Interaction
- Tencent Cloud's voice technology (https://console.cloud.tencent.com/asr)
- Azure: You will need to register for a key at https://speech.microsoft.com/portal. Azure offers 1 year of free service for new users and reasonable pricing.
- Alibaba Cloud: You will need to sign up at https://nls-portal.console.aliyun.com/overview and create a project. You must enable Alibaba Cloud's Intelligent Speech Interaction feature and activate the speech synthesis and recording file recognition (Express Edition) functions.
- Tencent Cloud: You will need to sign up at https://console.cloud.tencent.com/asr and enable the speech recognition and speech synthesis functions.
Text-to-speech from Microsoft Azure is currently the most outstanding service.
The large model section currently supports Moonshot, OpenAI, Azure OpenAI, Baidu Qianfan, Baichuan, Tongyi Qwen, DeepSeek, and others.
Moonshot API access: https://platform.moonshot.cn/
Baidu Qianfan API access: https://cloud.baidu.com/doc/WENXINWORKSHOP/s/yloieb01t
Baichuan API access: https://platform.baichuan-ai.com/
Alibaba Cloud Tongyi Qwen API access: https://help.aliyun.com/document_detail/611472.html?spm=a2c4g.2399481.0.0
DeepSeek API access: https://www.deepseek.com/
After setting up the basic configuration, you can proceed to the AI video section.
First, provide a keyword, and use the large model to generate video content:
You can choose the language and duration of the video content. If you are not satisfied with the generated video content and keyword, you can manually modify it.
You can choose the language and voice for the voiceover.
You can also adjust the voice speed.
Voice preview function will be supported in the future.
The background music is located in the project's bgmusic folder.
Currently, there are only two background music tracks provided. You can add your own background music files to this folder.
In the video configuration section, you can choose the video layout, frame rate, and video size.
You can also enable video transition effects. Currently, 30+ transition effects are supported.
Local video resource usage will be supported in the future.
The subtitle files are located in the fonts folder at the project root.
Currently, two font collections are supported: Songti and Pingfang.
You can choose the subtitle position, color, border color, and border width.
Finally, click the "Generate Video" button to create the video.
The page will display the specific steps and progress.
After the video is generated, it will be displayed at the bottom, and you can directly play and watch the effect.
After starting the project, you can find the video splicing area in the upper left corner.
Click on it to enter the page of the batch video splicing tool.
In the video splicing area, we can configure up to 5 video segments.
You can control the number of segments by clicking "Add Segment" or "Delete Segment".
Some friends might ask, what are video segments?
A long video cannot have only one video theme. For example, the first half of your video may be about the style of clothing, and the second half may be about the material of the clothing.
So, the clothing style is segment 1, and the material is segment 2.
What we need to do is collect material for the clothing style, which can be mp4 videos or jpg, png, and other image formats. The resolution should be as high as possible; otherwise the quality of the generated video may suffer.
Then, put the material for the clothing style in the resource directory of video segment 1.
For example, in the resource directory shown in the image:
d:\downloads\work\scen1
Similarly, we put the material for the clothing material in the resource directory of video segment 2.
As shown below:
What are video resource texts?
Video resource texts are the textual descriptions you need to associate with the video segment.
You can prepare many texts for a segment, and then put these texts in a txt file. One text per line in the txt file.
The system will randomly select a line from the txt file as the final textual description of the video segment.
Below is an example of a text file:
Accurate cutting, smooth lines, the design of this vest perfectly fits the body shape, whether it is loose or slim, it can show your elegant posture.
Our designers perfectly blend classic and modern styles. Every line and every cut is to showcase your unique body shape.
Every precise cut is carefully calculated to create the most suitable style for your body. From the shoulder line to the waist cut, each part showcases your unique style.
Accurate cutting, smooth lines, the design of this vest is intended to make every wearer feel the tailored fit.
Accurate cutting, smooth lines, the design of this vest fits perfectly, showing elegant posture whether it's loose or slim.
Designers blend classic and modern styles, every line and cut showcases a unique body shape.
Every precise cut is carefully calculated to create a style that suits your body shape, showcasing a unique style from the shoulder line to the waist cut.
Accurate cutting and smooth lines, the vest design is aimed at a tailored fit, showcasing the wearer's personal charm.
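The random selection described above — one line drawn from the segment's txt file for each generated video — can be sketched in a few lines of Python. This is only an illustration of the mechanism; the function name and file handling here are mine, not the project's actual code:

```python
import random

def pick_segment_text(txt_path, rng=random):
    """Pick one non-empty line at random from a segment's text file."""
    with open(txt_path, encoding="utf-8") as f:
        # One candidate description per line; skip blank lines.
        lines = [line.strip() for line in f if line.strip()]
    if not lines:
        raise ValueError(f"no usable lines in {txt_path}")
    return rng.choice(lines)
```

Because each video draws its line independently, preparing more lines per segment directly increases how distinct the batch-generated videos are from one another.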
Configure your video segment with the video resource and resource texts.
In the video dubbing area, you can select the dubbing language and the corresponding voice. Currently, 100+ voices are supported.
You can also select different dubbing speeds to support different usage scenarios.
If you are unsure about the dubbing, you can click on "Try to Listen" to listen to the corresponding dubbing voice.
The background music is placed in the "bgmusic" directory under the project. You can add background music files to this folder as needed.
You can choose whether to enable background music and the default background music volume.
In the video configuration area, you can choose the video layout: portrait, landscape, or square.
You can choose the video frame rate, video size, and the minimum and maximum length of each video segment.
Most importantly, you can also enable video transition effects. Currently, it supports 30+ video transition effects.
If you need subtitles, you can click to enable the subtitle option, and set the subtitle font, subtitle font size, and subtitle color, etc.
If you are unsure how to set it, you can choose the default settings.
The system currently supports generating up to 100 videos at a time, according to your needs.
Finally, click the "Generate Video" button to generate the video.
The page will show the corresponding progress.
The generated videos will be displayed at the bottom of the page, and you can play them as needed.
If you have generated multiple videos, you can find them in the "final" directory of the project folder.
The automatic publishing tool is essentially based on the Selenium automation framework.
By simulating human click operations, it can complete most tasks that would otherwise require manual operation. This frees up your hands.
Additionally, there are two ways to implement this automation. One is to start a browser while running the program. The other is to attach to an existing browser to operate on its page.
This tool chooses to attach to an existing browser.
This is mainly because some video platforms require scanning a QR code with a mobile phone to log in, and it is difficult to simulate this login process in the program.
Currently, automatic publishing supports two browsers: Chrome and Firefox. You can choose one according to your needs.
The current mainstream browser is undoubtedly Chrome. So, let's first talk about how to support Chrome.
- First, download and install Chrome and note its version number. You can download Chrome from the official website.
- Download the ChromeDriver that matches your Chrome version from the ChromeDriver download page, making sure it also matches your operating system. After downloading, unzip ChromeDriver to a local directory. Avoid directory paths containing Chinese characters, as the driver may not run properly from them.
- Start Chrome in debug mode.
If you are using a Mac, you can first set an alias for Chrome:
alias chrome="/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome"
Start Chrome in debug mode:
chrome --remote-debugging-port=9222
If you are using Windows, you can add the following to the target of the Chrome desktop shortcut:
--remote-debugging-port=9222
Then double-click to open Chrome in debug mode.
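Once Chrome is running with the debug port open, an automation script can attach to it instead of launching a fresh browser. Below is a minimal sketch of how such an attachment works with Selenium; the helper names are illustrative rather than the project's actual code, and it assumes Chrome is already listening on port 9222:

```python
def chrome_debug_address(port=9222):
    """Address of the already-running Chrome debug endpoint."""
    return f"127.0.0.1:{port}"

def attach_to_chrome(chromedriver_path, port=9222):
    """Attach Selenium to a Chrome started with --remote-debugging-port."""
    # Imported here so the helper above stays usable without Selenium installed.
    from selenium import webdriver
    from selenium.webdriver.chrome.service import Service

    options = webdriver.ChromeOptions()
    # Instead of launching a new browser, connect to the running one,
    # which keeps any QR-code logins already performed in that window.
    options.add_experimental_option("debuggerAddress", chrome_debug_address(port))
    return webdriver.Chrome(service=Service(chromedriver_path), options=options)
```

This is why the manual QR-code logins survive: the script drives the very browser window you logged in with, rather than a clean profile.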
Apart from Chrome, the most commonly used browser is probably Firefox.
So, we also provide support for Firefox.
To use Firefox, you need to follow these steps:
- Download and install Firefox.
- Download the geckodriver that matches your Firefox version, making sure it also matches your operating system. After downloading, unzip geckodriver to a local directory. Avoid directory paths containing Chinese characters, as the driver may not run properly from them.
- Start Firefox in debug mode:
Similar to Chrome, you need to start Firefox with the following flags: -marionette -start-debugger-server 2828
Note: The port here must be 2828 and cannot be customized.
Now, if you open Firefox, you will see that the navigation bar turns red, indicating that you have started remote debugging mode.
Enter about:config to confirm that marionette.port is set to 2828.
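For Firefox, attaching to the already-running browser goes through geckodriver's `--connect-existing` and `--marionette-port` flags. The sketch below shows the idea; the helper names are mine and assume Firefox was started with `-marionette` as described above:

```python
def geckodriver_args(port=2828):
    """geckodriver flags for attaching to an already-running Firefox."""
    # --connect-existing tells geckodriver not to launch a new Firefox;
    # --marionette-port must match Firefox's fixed marionette port (2828).
    return ["--marionette-port", str(port), "--connect-existing"]

def attach_to_firefox(geckodriver_path):
    """Attach Selenium to a Firefox started with -marionette."""
    # Imported here so the helper above stays usable without Selenium installed.
    from selenium import webdriver
    from selenium.webdriver.firefox.service import Service

    service = Service(geckodriver_path, service_args=geckodriver_args())
    return webdriver.Firefox(service=service)
```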
In a Windows environment, simply double-click "start.bat" to start.
In a Mac environment, execute "sh start.sh" in the project root directory.
The browser will automatically open the MoneyPrinterPlus homepage.
Click on the "Video Automatic Publishing Tool" on the left to access the page for the video automatic publishing tool.
You can select the driver type, Chrome or Firefox.
The driver location is the path to the previously downloaded chromedriver or geckodriver.
The video content directory is the directory where your video content is located.
After modifying the video directory, the video files and text files in the video directory will be automatically listed.
The video files are the content you want to publish.
What about the text files?
The text files are the textual content associated with the video.
For example, if you want to publish a video about Tang poetry, the corresponding content of the text file would be:
Wang Wei: To Guo Geshi
The high pavilion door is covered in lingering light, with peach and plum trees shading and willow catkins flying.
The sparse bells in the forbidden palace chime late, and the crying birds in the provincial office are rare.
Morning shakes the jade pendants and rushes to the golden palace, while evening offers the heavenly book and bows in the imperial hall.
I want to accompany you without growing old, and I'll lay down my morning attire due to illness.
Remember, the first line must be the title of the video.
The content of the other lines is up to you.
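The layout described above — title on the first non-empty line, free-form description after it — can be parsed with a few lines of Python. This is only a sketch of the file format; the function name is illustrative, not the project's code:

```python
def parse_publish_text(txt_path):
    """Split a publish text file into (title, description).

    The first non-empty line is the video title; everything after it
    becomes the description body.
    """
    with open(txt_path, encoding="utf-8") as f:
        stripped = [line.rstrip("\n") for line in f if line.strip()]
    if not stripped:
        raise ValueError(f"{txt_path} is empty")
    title = stripped[0]
    description = "\n".join(stripped[1:])
    return title, description
```

In the Tang poetry example above, "Wang Wei: To Guo Geshi" would become the title and the four couplets the description.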
Then, take a look at the following page:
The video site configuration should be straightforward and largely self-explanatory.
Title Prefix: If you need to add an additional prefix to the video title, you can set it here.
Collection Name: Some video sites require you to select a collection. This is the name of the collection (the program will not create the collection for you, so you need to create it in advance on the website).
Video Tags: the tags to attach to the video, separated by spaces.
For Kwai, there is one additional field to configure.
You can choose whether to enable Douyin, Kwai, Video Number, or Xiaohongshu.
Next, you can prepare to publish the video.
But before publishing, you can click on "Environment Check".
If your homepage opens automatically, it means your environment configuration is fine. You can go ahead and publish the video.
Because all video sites require login, you need to open the corresponding site and log in to your account first before clicking the "Publish Video" button.
Once all your accounts are logged in, click the "Publish Video" button.
Start your journey of freedom.
The running interface will look something like this:
For friends who encounter problems, you can first check the summary of common issues here to see if you can solve the problem.
If anyone has any questions or ideas, feel free to join the group for discussion. Friends who think the project is good can buy the author a cup of tea.
Discussion Group | My WeChat |
---|---|