diff --git a/README.md b/README.md
index d47edc7..bc3e451 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,5 @@
-# Anime Downloader [![Total Downloads](https://img.shields.io/github/downloads/Oshan96/Anime-Downloader/total.svg?style=for-the-badge)](https://github.com/Oshan96/Anime-Downloader/releases)
+![Banner](docs/images/banner.png)
+# Monkey-DL (Anime Downloader) [![Total Downloads](https://img.shields.io/github/downloads/Oshan96/Anime-Downloader/total.svg?style=for-the-badge)](https://github.com/Oshan96/Anime-Downloader/releases)
You can now bulk download your favourite anime episodes for various websites, in various resolutions, with or without filler episodes
@@ -6,17 +7,37 @@ You can now bulk download your favourite anime episodes for various websites, in
## Donations
If this project is helpful to you and love my work and feel like showing love/appreciation, would you like to buy me a coffee?
-
+
+
+## Features
+* Download Anime from various [supported websites](#Supported-Websites)
+* Batch download episodes in the given range at once
+* High speed downloads
+* Download multiple episodes at once
+* Select the resolution (from the available resolutions for the website)
+* Select sub/dub (Check whether the website supports selective sub/dub downloads from [here](#Supported-Websites))
+* Choose whether filler episodes should be downloaded via the "Download fillers" option (requires an animefillerlist URL)
+* Name the files in "Episode - {episode_number} - {episode_title}" format by providing an animefillerlist URL
+* Choose the directory files need to be downloaded into
+* Custom HLSDownloader (FFMPEG installation is optional from v1.0.4 upwards)
+* Custom decryptors for encrypted websites
## Supported Websites
-| Website | Need recaptcha token? | Supported resolutions | FFMPEG needed? | File Size | Additional Notes |
-|--- |--- |--- |--- |--- |--- |
-| [9Anime](https://9anime.to/) | Yes | Default only | No | 500-600MB | Will always work, provided token |
-| [4Anime](https://4anime.to/) | No | Default only | No | Around 150MB | Upon failure, visit 4anime website and restart anime downloader. Fastest downloads |
-| [AnimePahe](https://animepahe.com/) | No | 720p, 1080p | No | 720p: ~150MB, 1080p: ~200MB | 2captcha API key is needed to download from AnimePahe. Also download speed is capped by host |
-| [AnimeFreak](https://www.animefreak.tv/) | No | Default only | No | ~90-100MB | Downloading from AnimeFreak would be a bit slow at times |
-| [GoGoAnime](https://gogoanime.io/) | No | Mostly 360p, 480p | Yes | - | gogoanime.io and gogoanime.video are supported. gogoanime.pro support will be added in future |
-| [AnimeUltima](https://www.animeultima.to/) | No | 240p, 360p, 480p, 720p, 1080p | Yes | 1080p is around 1GB | AnimeUltima is having issues in their end. Will be supported again once they are backup |
+
+#### Note
+Since the v1.0.4 release, Monkey-DL uses a custom HLSDownloader to download from HLS streams, which is over 10x faster than downloading with FFMPEG. Installing FFMPEG is now optional; it is only used as a fallback if the custom downloader hits an error. For now, it is still recommended to keep FFMPEG installed as well.
+The FFMPEG dependency will be removed completely in a later release.
+
+| Website |Sub/Dub selection | Need recaptcha token? | Supported resolutions | FFMPEG needed? | File Size | Additional Notes |
+|--- |--- |--- |--- |--- |--- |--- |
+| [9Anime](https://9anime.to/) | No | Yes | Default only | No | 500-600MB | Will always work, provided token |
+| [4Anime](https://4anime.to/) | No | No | Default only | No | Around 150MB | Upon failure, visit 4anime website and restart anime downloader. Fastest downloads |
+| [AnimePahe](https://animepahe.com/) | No | No | 720p, 1080p | No | 720p: ~150MB, 1080p: ~200MB | 2captcha API key is needed to download from AnimePahe. Also download speed is capped by host |
+| [Twist](https://twist.moe/) | No | No | 1080p | No | 500MB+ | Files are very high quality and fast downloads. Seems to be raw HorribleSub content |
+| [AnimeFreak](https://www.animefreak.tv/) | Yes | No | Default only | No | ~90-100MB | Downloading from AnimeFreak is generally fast |
+| [GoGoAnime](https://gogoanime.io/) | No | No | Mostly 360p, 480p | Optional | - | gogoanime.io and gogoanime.video are supported. gogoanime.pro support will be added in future |
+| [AnimeUltima](https://www.animeultima.to/) | Yes | No | Sub: 240p, 360p, 480p, 720p, 1080p<br>Dub: Default only | Optional | 1080p is 1GB+ | File sizes are relatively large |
+| [AnimeFlix](https://animeflix.io/) | Yes | No | Sub: 240p, 360p, 480p, 720p, 1080p<br>Dub: Default only | Optional | 1080p is 1GB+ | File sizes are relatively large |
## Download Anime Downloader [Windows]
> Note : Currently only windows executable is provided (Linux, Mac users go to [Build from source](#Building-from-source))
@@ -35,7 +56,7 @@ Open settings.json and set [2captcha](https://2captcha.com/) API key in "api_key
*Don't have 2captcha API key? Don't worry! You can still use this to download anime. Check the "FAQ" section on [how to download if you don't have a 2captcha API key](#Q---I-don't-have-a-2captcha-API-key,-is-there-any-workaround-for-that?)*
-##### And in order to download from some websites (like animeultima.to) Anime Downloader requires you to have [FFMPEG](https://www.ffmpeg.org/) to be downloaded ([Check whether your anime website needs FFMPEG](#Supported-Websites))
+##### In order to download from some websites (like animeultima.to) Anime Downloader requires you to have [FFMPEG](https://www.ffmpeg.org/) to be downloaded ([Check whether your anime website needs FFMPEG](#Supported-Websites))
- You can download FFMPEG from [here](https://www.ffmpeg.org/download.html)
- And then add the ffmpeg executable to system path
@@ -49,12 +70,18 @@ sudo apt install ffmpeg
#### Still not able to download? Go ahead and post your issue [here](https://github.com/Oshan96/Anime-Downloader/issues). And I will look into the error and give necessary fixes!
## Running the application
-Navigate to the extracted folder and open a cmd or powershell window from that folder and execute "anime-dl.exe" from command line.
+Navigate to the extracted folder, open a cmd or powershell window there, and execute "monkey-dl.exe" from the command line.
## How to download using GUI version (v0.1.1-alpha upwards)
It is same as the CLI version, but provided a graphical user interface to collect necessary parameters.
-Execute the "anime-dl.exe" to start.
+Note : From v1.0.4 upwards, Anime Downloader is named "Monkey-DL" and the executable is called "monkey-dl.exe"
+
+* v1.0.4 and above:
+ Execute the "monkey-dl.exe" to start.
+
+* v1.0.3 and lower:
+ Execute the "anime-dl.exe" to start.
If you're running from source files, execute the "anime-dl.py" script
@@ -93,7 +120,7 @@ Above mentioned are the arguments you should use in order to download anime.
### Q - How can I download one piece anime episodes from 10 to 20?
```bash
-./anime-dl.exe -u https://9anime.to/watch/one-piece.ov8/169lyx -s 10 -e 20 -n https://www.animefillerlist.com/shows/one-piece
+./anime-dl.py -u https://9anime.to/watch/one-piece.ov8/169lyx -s 10 -e 20 -n https://www.animefillerlist.com/shows/one-piece
```
Explantion of the commands used :
@@ -105,7 +132,7 @@ Explantion of the commands used :
### Q - How can I download one piece anime episodes 30 to 70 into "D:\Anime\One Piece" folder?
```bash
-./anime-dl.exe -u https://9anime.to/watch/one-piece.ov8/169lyx -s 30 -e 70 -n https://www.animefillerlist.com/shows/one-piece -d "D:\Anime\One Piece"
+./anime-dl.py -u https://9anime.to/watch/one-piece.ov8/169lyx -s 30 -e 70 -n https://www.animefillerlist.com/shows/one-piece -d "D:\Anime\One Piece"
```
Explanation of commands :
@@ -114,7 +141,7 @@ Explanation of commands :
### Q - How can I download bleach episodes 100 to 130 into "D:\Anime\Bleach" folder and download 4 episodes at once?
```bash
-./anime-dl.exe -u https://9anime.to/watch/bleach.6j9/lz7wvq -s 100 -e 130 -n https://www.animefillerlist.com/shows/bleach -d "D:\Anime\Bleach" -t 4
+./anime-dl.py -u https://9anime.to/watch/bleach.6j9/lz7wvq -s 100 -e 130 -n https://www.animefillerlist.com/shows/bleach -d "D:\Anime\Bleach" -t 4
```
Explanation of commands :
@@ -123,7 +150,7 @@ Explanation of commands :
### Q - How can I download bleach episodes 100 to 130 without filler episodes into "D:\Anime\Bleach" folder and download 3 episodes at once?
```bash
-./anime-dl.exe -u https://9anime.to/watch/bleach.6j9/lz7wvq -s 100 -e 130 -n https://www.animefillerlist.com/shows/bleach -d "D:\Anime\Bleach" -t 3 -f False
+./anime-dl.py -u https://9anime.to/watch/bleach.6j9/lz7wvq -s 100 -e 130 -n https://www.animefillerlist.com/shows/bleach -d "D:\Anime\Bleach" -t 3 -f False
```
Explanation of commands :
@@ -172,7 +199,7 @@ Now we have what we need!
All you have to do is, add -c or --code command to the previous example's code like below
```bash
-./anime-dl.exe -u https://9anime.to/watch/bleach.6j9/lz7wvq -s 100 -e 130 -n https://www.animefillerlist.com/shows/bleach -d "D:\Anime\Bleach" -t 4 -f False -c 03AERD8Xode9TV-gFkG-7CNkllpKoiXfDKVEZ0Lu9NjGpxVv89bjwNHkS5bcfXHqKXx746tsNW_IUMhSVV7Aym-lcvdn6jd5Ggy1a28AQ_BI1K380joLpYReKB0EOjJjO2oVEUpOgtPu0fgfjxABKpI9EjrDZ0T7iSsKDPfhnXebQcZxIbAwelADkZ8m4qYojn3J_-kQyreIRCEztWyTTpm_SoNt6lIpFxG-egDFqVF6Sg7ICPp0QQrPa5UC-6pecgs_3xspg7PN48VOXGfHH4PCARIaGVL-J5CYNsesqUuZ4t_4kni9euduhtB3KCrV1_IYOhymepwczWIKKPGmze2DKVddoDBABlS8NZaxHRFAzNjjJHOhlRyblBMlmerK_Mu5N25bZeY5ZZ
+./anime-dl.py -u https://9anime.to/watch/bleach.6j9/lz7wvq -s 100 -e 130 -n https://www.animefillerlist.com/shows/bleach -d "D:\Anime\Bleach" -t 4 -f False -c 03AERD8Xode9TV-gFkG-7CNkllpKoiXfDKVEZ0Lu9NjGpxVv89bjwNHkS5bcfXHqKXx746tsNW_IUMhSVV7Aym-lcvdn6jd5Ggy1a28AQ_BI1K380joLpYReKB0EOjJjO2oVEUpOgtPu0fgfjxABKpI9EjrDZ0T7iSsKDPfhnXebQcZxIbAwelADkZ8m4qYojn3J_-kQyreIRCEztWyTTpm_SoNt6lIpFxG-egDFqVF6Sg7ICPp0QQrPa5UC-6pecgs_3xspg7PN48VOXGfHH4PCARIaGVL-J5CYNsesqUuZ4t_4kni9euduhtB3KCrV1_IYOhymepwczWIKKPGmze2DKVddoDBABlS8NZaxHRFAzNjjJHOhlRyblBMlmerK_Mu5N25bZeY5ZZ
```
### Recaptcha does not appear even in private browsing. What can I do?
@@ -282,9 +309,13 @@ This project is licensed under the MIT License - see the [LICENSE.md](LICENSE.md
Anime Downloader wouldn't be possible without these awesome free and opensource projects!
- [CloudScraper](https://github.com/VeNoMouS/cloudscraper)
- [Js2Py](https://github.com/PiotrDabkowski/Js2Py)
+- [JsBeautifier](https://github.com/beautify-web/js-beautify)
+- [PyCryptodome](https://github.com/Legrandin/pycryptodome)
- [PySimpleGUI](https://github.com/PySimpleGUI/PySimpleGUI)
- [FFMPEG](https://ffmpeg.org/)
+Special thanks to [u/sln0913](https://www.reddit.com/user/sln0913) for the awesome logo and banner designs!
+
## Disclaimer
This software has been developed only for educational purposes by the [Author](https://github.com/Oshan96). By no means this encourage content piracy. Please support original content creators!
diff --git a/anime_downloader/Anime_Downloader.py b/anime_downloader/Anime_Downloader.py
index 3b7df9c..5a1db58 100644
--- a/anime_downloader/Anime_Downloader.py
+++ b/anime_downloader/Anime_Downloader.py
@@ -5,12 +5,14 @@
import shutil
import os
import sys
+import traceback
from platform import system
from threading import Thread
from queue import Queue
from art import text2art
from util import Color
from util.ffmpeg_downloader import FFMPEGDownloader
+from util.hls_downloader import HLSDownloader
from scrapers.nineanime import Anime_Scraper
directory = ""
@@ -72,6 +74,9 @@ def __clean_file_name(self, file_name):
return file_name
def __download_episode(self, episode):
+ if system() == "Windows":
+ episode.title = self.__clean_file_name(episode.title)
+
if episode.is_direct:
if episode.download_url is None:
Color.printer("ERROR", "Download URL is not set for " + episode.episode + ", skipping...", self.gui)
@@ -79,9 +84,6 @@ def __download_episode(self, episode):
Color.printer("INFO", "Downloading " + episode.episode + "...", self.gui)
- if system() == "Windows":
- episode.title = self.__clean_file_name(episode.title)
-
# print(self.is_titles)
# print(episode.title)
@@ -92,15 +94,21 @@ def __download_episode(self, episode):
# print("without title")
file_name = self.directory + episode.episode + ".mp4"
- with requests.get(episode.download_url, stream=True, verify=False) as r:
+ with requests.get(episode.download_url, headers=episode.request_headers, stream=True, verify=False) as r:
with open(file_name, 'wb') as f:
shutil.copyfileobj(r.raw, f, length=16 * 1024 * 1024)
Color.printer("INFO", episode.episode + " finished downloading...", self.gui)
else:
- Color.printer("INFO", "HLS link found. Using FFMPEG to download...", self.gui)
- FFMPEGDownloader(episode, self.directory, self.gui).download()
+ Color.printer("INFO", "HLS link found. Using custom HLSDownloader to download...", self.gui)
+ try:
+ HLSDownloader(episode, self.directory, requests.session(), self.gui).download()
+ except Exception as ex:
+ trace = traceback.format_exc()
+ print(trace)
+ Color.printer("ERROR", "Custom HLS Downloader failed! Using FFMPEG to download...", self.gui)
+ FFMPEGDownloader(episode, self.directory, self.gui).download()
def download(self):
diff --git a/anime_downloader/app.ico b/anime_downloader/app.ico
new file mode 100644
index 0000000..1c19a2d
Binary files /dev/null and b/anime_downloader/app.ico differ
diff --git a/anime_downloader/app.png b/anime_downloader/app.png
new file mode 100644
index 0000000..f90c77e
Binary files /dev/null and b/anime_downloader/app.png differ
diff --git a/anime_downloader/extractors/base_extractor.py b/anime_downloader/extractors/base_extractor.py
index b25d4c4..504325e 100644
--- a/anime_downloader/extractors/base_extractor.py
+++ b/anime_downloader/extractors/base_extractor.py
@@ -3,7 +3,10 @@ class BaseExtractor:
def __init__(self, url, session):
self.url = url
- self.session = session
+ if session is None:
+ self.session = cloudscraper.create_scraper()
+ else:
+ self.session = session
def extract_page_content(self):
video_page = self.session.get(self.url)
diff --git a/anime_downloader/extractors/jwplayer_extractor.py b/anime_downloader/extractors/jwplayer_extractor.py
index 940ff15..06718ea 100644
--- a/anime_downloader/extractors/jwplayer_extractor.py
+++ b/anime_downloader/extractors/jwplayer_extractor.py
@@ -33,7 +33,12 @@ def extract_direct_url(self):
# if the given resolution is not found, the first available link would be given
def get_resolution_link(self, master_url, resolution):
count = 0
- content = self.session.get(master_url).text
+        try:
+            content = self.session.get(master_url).text
+        except Exception:
+            # transient failure fetching the master playlist; retry once
+            content = self.session.get(master_url).text
+
data_list = content.split("\n")
link = None
diff --git a/anime_downloader/gui/GUI.py b/anime_downloader/gui/GUI.py
index d72d92e..46c393a 100644
--- a/anime_downloader/gui/GUI.py
+++ b/anime_downloader/gui/GUI.py
@@ -1,6 +1,8 @@
+import sys
import queue
import json
import cloudscraper
+import traceback
import PySimpleGUI as sg
from threading import Thread
from time import sleep
@@ -10,16 +12,19 @@
from scrapers.fouranime.fouranime_scraper import FourAnimeScraper
from scrapers.nineanime.nineanime_scraper import NineAnimeScraper
from scrapers.animeultima.animeultima_scraper import AnimeUltimaScraper
+from scrapers.animeflix.animeflix_scraper import AnimeFlixScraper
from scrapers.animepahe.animepahe_scraper import AnimePaheScraper
from scrapers.gogoanime.gogoanime_scraper import GoGoAnimeScraper
from scrapers.animefreak.animefreak_scraper import AnimeFreakScraper
+from scrapers.twist.twist_scraper import TwistScraper
sg.theme('Dark Amber')
i = 0
max_val = 100
-def download(anime_url, names_url, start_epi, end_epi, is_filler, is_titles, token, threads, directory, gui, resolution="720", is_dub=False):
+def download(anime_url, names_url, start_epi, end_epi, is_filler, is_titles, token, threads, directory, gui,
+ resolution="720", is_dub=False):
global max_val
session = cloudscraper.create_scraper()
@@ -41,6 +46,10 @@ def download(anime_url, names_url, start_epi, end_epi, is_filler, is_titles, tok
printer("INFO", "AnimeUltima URL detected...", gui)
scraper = AnimeUltimaScraper(anime_url, start_epi, end_epi, session, gui, resolution, is_dub)
+ elif "animeflix" in anime_url:
+ printer("INFO", "AnimeFlix URL detected...", gui)
+ scraper = AnimeFlixScraper(anime_url, start_epi, end_epi, session, gui, resolution, is_dub)
+
elif "gogoanime" in anime_url:
printer("INFO", "GoGoAnime URL detected...", gui)
if "gogoanime.pro" in anime_url:
@@ -53,6 +62,10 @@ def download(anime_url, names_url, start_epi, end_epi, is_filler, is_titles, tok
printer("INFO", "AnimeFreak URL detected...", gui)
scraper = AnimeFreakScraper(anime_url, start_epi, end_epi, session, gui, is_dub)
+ elif "twist" in anime_url:
+ printer("INFO", "Twist URL detected...", gui)
+ scraper = TwistScraper(anime_url, start_epi, end_epi, session, gui)
+
elif "animepahe.com" in anime_url:
printer("INFO", "AnimePahe URL detected...", gui)
api_key = ""
@@ -93,7 +106,8 @@ def download(anime_url, names_url, start_epi, end_epi, is_filler, is_titles, tok
if episodes:
if is_titles:
printer("INFO", "Setting episode titles...", gui)
- episodes = EpisodeNamesCollector(names_url, start_epi, end_epi, is_filler, episodes).collect_episode_names()
+ episodes = EpisodeNamesCollector(names_url, start_epi, end_epi, is_filler,
+ episodes).collect_episode_names()
else:
printer("ERROR", "Failed to retrieve download links!", gui)
@@ -105,6 +119,8 @@ def download(anime_url, names_url, start_epi, end_epi, is_filler, is_titles, tok
downloader.download()
except Exception as ex:
+ trace = traceback.format_exc()
+ print(trace)
printer("ERROR", ex, gui)
printer("ERROR", "Something went wrong! Please close and restart Anime Downloader to retry!", gui)
@@ -124,16 +140,18 @@ def create_ui(self):
[sg.Text("Save To", size=(25, 1), text_color="white"), sg.InputText(key="location"), sg.FolderBrowse()],
[sg.Text("Episodes Details", size=(15, 1)), sg.Text("_" * 60, pad=(0, 15))],
- [sg.Text("From", text_color="white"), sg.InputText(key="start_epi", size=(5, 1)),
- sg.Text("To", text_color="white"), sg.InputText(key="end_epi", size=(5, 1)),
+ [sg.Text("From", text_color="white", size=(8, 1)), sg.InputText(key="start_epi", size=(6, 1)),
+ sg.Text("To", text_color="white", size=(8, 1)), sg.InputText(key="end_epi", size=(5, 1)),
sg.Text("Download Fillers?", text_color="white"),
sg.Combo(["Yes", "No"], size=(4, 1), default_value="Yes", key="isFiller"),
sg.Text("Threads", text_color="white"),
- sg.Spin([i for i in range(1, 21)], initial_value=1, size=(3, 1), key="threads"),
- sg.Text("Resolution", text_color="white"),
- sg.Combo(["240", "360", "480", "720", "1080"], size=(4, 1), default_value="1080", key="resolution")],
+ sg.Spin([i for i in range(1, 21)], initial_value=1, size=(3, 1), key="threads")],
+ [],
+ [sg.Text("Resolution", text_color="white", size=(8, 1)),
+ sg.Combo(["240", "360", "480", "720", "1080"], size=(4, 1), default_value="1080", key="resolution"),
+ sg.Text("Sub/Dub", text_color="white", size=(8, 1)),
+ sg.Combo(["Sub", "Dub"], size=(4, 1), default_value="Sub", key="is_dub")],
[],
-
[sg.Text("Optional Settings (Fill this if you don't have 2captcha key)", size=(45, 1)),
sg.Text("_" * 25, pad=(0, 15))],
[sg.Text("Recaptcha Token (Optional)", text_color="white", size=(25, 1)),
@@ -147,7 +165,10 @@ def create_ui(self):
[sg.ProgressBar(100, key="progress", orientation="h", size=(45, 15))]
]
- self.window = sg.Window("Anime Downloader v1.0.3", layout)
+ if sys.platform.lower() == "win32":
+ self.window = sg.Window("Monkey-DL v1.0.4", layout, icon="app.ico")
+ else:
+ self.window = sg.Window("Monkey-DL v1.0.4", layout, icon="app.png")
def check_messages(self, values):
global i, max_val
@@ -162,7 +183,7 @@ def check_messages(self, values):
txt += "\n" + message
if "finished downloading..." in message or "failed to download!" in message:
- i+=1
+ i += 1
self.window["progress"].UpdateBar(i, max=max_val)
self.window['txt_msg'].update(txt)
@@ -188,6 +209,7 @@ def run(self):
names_url = values["names_url"]
is_titles = True if names_url != "" else False
is_filler = True if values["isFiller"] == "Yes" else False
+ is_dub = True if values["is_dub"] == "Dub" else False
tok = values["token"].rstrip()
token = tok if tok != "" else None
@@ -213,7 +235,9 @@ def run(self):
self.window["txt_msg"].update("")
self.window.refresh()
- thread = Thread(target=download, args=(anime_url, names_url, start_epi, end_epi, is_filler, is_titles, token, threads, directory, self, resolution), daemon=True)
+ thread = Thread(target=download, args=(
+ anime_url, names_url, start_epi, end_epi, is_filler, is_titles, token, threads, directory, self,
+ resolution, is_dub), daemon=True)
thread.start()
self.check_messages(values)
diff --git a/anime_downloader/scrapers/animeflix/__init__.py b/anime_downloader/scrapers/animeflix/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/anime_downloader/scrapers/animeflix/animeflix_scraper.py b/anime_downloader/scrapers/animeflix/animeflix_scraper.py
new file mode 100644
index 0000000..ecf8bab
--- /dev/null
+++ b/anime_downloader/scrapers/animeflix/animeflix_scraper.py
@@ -0,0 +1,105 @@
+import re
+import traceback
+from scrapers.base_scraper import BaseScraper
+from util.Episode import Episode
+from extractors.jwplayer_extractor import JWPlayerExtractor
+
+
+class AnimeFlixScraper(BaseScraper):
+ def __init__(self, url, start_episode, end_episode, session, gui=None, resolution="720", is_dub=False):
+ super().__init__(url, start_episode, end_episode, session, gui)
+ self.resolution = resolution
+ self.is_dub = is_dub
+ url_data = re.search("(.*)/shows/(.*)", self.url)
+ self.url_base = url_data.group(1)
+ self.slug = url_data.group(2).split("/")[0]
+ self.extractor = JWPlayerExtractor(None, None)
+
+ self.anime_id = None
+ self.__set_anime_id()
+
+ def __set_anime_id(self):
+ api_url = "{base}/api/anime/detail?slug={slug}".format(base=self.url_base, slug=self.slug)
+ data = self.session.get(api_url).json()
+ self.anime_id = data["data"]["id"]
+
+ def __get_start_end_page(self):
+ limit = 50
+
+ api_url = "{base}/api/episodes?anime_id={id}&limit={limit}".format(base=self.url_base, id=self.anime_id,
+ limit=str(limit))
+ data = self.session.get(api_url).json()
+
+ last_page = data["meta"]["last_page"]
+
+ start_page = ((self.start_episode - 1) // limit) + 1
+ end_page = ((self.end_episode - 1) // limit) + 1
+
+ if end_page > last_page:
+ end_page = last_page
+
+        return start_page, end_page, limit
+
+ def __set_download_link(self, episode):
+ api_url = "{base}/api/videos?episode_id={id}".format(base=self.url_base, id=str(episode.id))
+ url_data = self.session.get(api_url).json()
+ for src_data in url_data:
+ if self.is_dub:
+ if src_data["lang"] == "dub" and src_data["type"] != "hls":
+ episode.download_url = src_data["file"]
+ return
+ else:
+ if src_data["lang"] == "sub" and src_data["hardsub"] and src_data["type"] == "hls":
+ master = src_data["file"]
+ # print("master")
+ # print(master)
+ res_stream_link = self.extractor.get_resolution_link(master, self.resolution)
+ episode.download_url = res_stream_link
+ episode.is_direct = False
+ return
+
+ def __collect_episodes(self):
+ if self.anime_id is None:
+ return None
+
+ episodes = []
+
+ start_page, end_page, limit = self.__get_start_end_page()
+ curr_page = start_page
+ while curr_page <= end_page:
+ api_url = "{base}/api/episodes?anime_id={id}&limit={limit}&page={page}".format(base=self.url_base,
+ id=self.anime_id,
+ limit=str(limit),
+ page=str(curr_page))
+ curr_page += 1
+
+ api_data = self.session.get(api_url).json()
+ for epi in api_data["data"]:
+ epi_no = int(epi["episode_num"])
+
+ if epi_no < self.start_episode or epi_no > self.end_episode:
+ continue
+
+ if self.is_dub and epi["dub"] == 0:
+ print("No dubbed version for Episode - {epi}".format(epi=str(epi_no)))
+ continue
+
+                title = epi["title"]
+                epi_id = epi["id"]
+                episode = Episode(title, "Episode - {epi}".format(epi=str(epi_no)))
+                episode.id = epi_id
+
+ self.__set_download_link(episode)
+
+ episodes.append(episode)
+
+ return episodes
+
+ def get_direct_links(self):
+ try:
+ episodes = self.__collect_episodes()
+ return episodes
+ except Exception as ex:
+ trace = traceback.format_exc()
+ print(trace)
+ return None
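The page-window arithmetic in `__get_start_end_page` above maps a 1-indexed episode range onto the API's fixed-size pages. A minimal standalone sketch of that mapping (the function and parameter names here are illustrative, not part of the codebase):

```python
def episode_pages(start_episode, end_episode, last_page, limit=50):
    """Map a 1-indexed episode range onto the API page range that covers it."""
    start_page = ((start_episode - 1) // limit) + 1
    end_page = ((end_episode - 1) // limit) + 1
    # never request past the final page the API reports
    return start_page, min(end_page, last_page)
```

For example, episodes 45 to 120 with a page limit of 50 span pages 1 to 3.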
diff --git a/anime_downloader/scrapers/animeultima/animeultima_scraper.py b/anime_downloader/scrapers/animeultima/animeultima_scraper.py
index 5cd2a5f..5b5d791 100644
--- a/anime_downloader/scrapers/animeultima/animeultima_scraper.py
+++ b/anime_downloader/scrapers/animeultima/animeultima_scraper.py
@@ -1,7 +1,10 @@
+import re
+import traceback
from bs4 import BeautifulSoup
from scrapers.base_scraper import BaseScraper
from util.Episode import Episode
from extractors.jwplayer_extractor import JWPlayerExtractor
+from util.js_unpacker import JsUnpacker
class AnimeUltimaScraper(BaseScraper):
@@ -11,7 +14,7 @@ def __init__(self, url, start_episode, end_episode, session, gui=None, resolutio
self.is_dub = is_dub
self.resolution = resolution
self.base_url = "https://www1.animeultima.to"
- self.extractor = JWPlayerExtractor(None, self.session)
+ self.extractor = JWPlayerExtractor(None, None)
def get_anime_id(self):
page = self.session.get(self.url).content
@@ -30,6 +33,8 @@ def get_anime_id(self):
content_data = meta_tag["content"].split("/")
return content_data[-2]
+ return None
+
def get_start_and_end_page(self, anime_id):
# print("start end page")
start_page = 0
@@ -68,6 +73,21 @@ def get_page_url(self, url):
return None
+ def set_stream_url(self, episode):
+ # print("set stream")
+ self.extractor.url = episode.page_url
+ stream_url = self.extractor.extract_stream_link(self.resolution)
+ print("Stream URL : " + stream_url)
+ episode.download_url = stream_url
+
+ def set_direct_url(self, episode, page_url):
+ page = self.session.get(page_url).text
+ func = re.search("eval\(.*\)", page).group(0)
+ eval_data = JsUnpacker().eval(func)
+ link = re.search('fone\s+=\s+\"(.*)\"', eval_data).group(1)
+ # print(link)
+ episode.download_url = link
+
def collect_episodes(self, anime_id, start_page, end_page):
# print("collect epis")
base_url = "https://www1.animeultima.to/api/episodeList?animeId=" + anime_id + "&page="
@@ -105,8 +125,13 @@ def collect_episodes(self, anime_id, start_page, end_page):
episode = Episode(title, "Episode - " + str(epi_no))
episode.page_url = page_url
- episode.is_direct = False
- self.set_stream_url(episode)
+ # print(episode.page_url)
+ if "animeultima.to/e/" not in page_url:
+ episode.is_direct = False
+ self.set_stream_url(episode)
+ else:
+                    print("Only a direct URL found; using default resolution to download")
+ self.set_direct_url(episode, page_url)
self.episodes.append(episode)
@@ -114,33 +139,25 @@ def collect_episodes(self, anime_id, start_page, end_page):
page_counter += 1
- def set_stream_url(self, episode):
- # print("set stream")
- self.extractor.url = episode.page_url
- stream_url = self.extractor.extract_stream_link(self.resolution)
- print("Stream URL : " + stream_url)
- episode.download_url = stream_url
-
- def set_stream_urls(self):
- extractor = JWPlayerExtractor(None, self.session)
- for episode in self.episodes:
- extractor.url = episode.page_url
- stream_url = extractor.extract_stream_link(self.resolution)
- episode.dowload_url = stream_url
-
def get_direct_links(self):
# print("direct links")
anime_id = self.get_anime_id()
- start_page, end_page = self.get_start_and_end_page(anime_id)
-
- # print(anime_id)
- # print(start_page, end_page)
+        # the anime id occasionally fails to parse on the first attempt; retry up to twice
+        for _ in range(2):
+            if anime_id is not None:
+                break
+            anime_id = self.get_anime_id()
try:
+ # print(anime_id)
+ start_page, end_page = self.get_start_and_end_page(anime_id)
+
+ # print(start_page, end_page)
self.collect_episodes(anime_id, start_page, end_page)
return self.episodes
except Exception as ex:
- print(ex)
+ trace = traceback.format_exc()
+ print(trace)
return None
diff --git a/anime_downloader/scrapers/twist/__init__.py b/anime_downloader/scrapers/twist/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/anime_downloader/scrapers/twist/twist_scraper.py b/anime_downloader/scrapers/twist/twist_scraper.py
new file mode 100644
index 0000000..5231f0f
--- /dev/null
+++ b/anime_downloader/scrapers/twist/twist_scraper.py
@@ -0,0 +1,62 @@
+import re
+from scrapers.base_scraper import BaseScraper
+from util.Color import printer
+from util.Episode import Episode
+from scrapers.twist.twist_source_decryptor import TwistSourceDecryptor
+
+
+class TwistScraper(BaseScraper):
+ def __init__(self, url, start_episode, end_episode, session, gui=None):
+ super().__init__(url, start_episode, end_episode, session, gui)
+
+ url_data = re.search("(.*)/a/(.*)", self.url)
+ print(url_data.group(2))
+ self.anime_name = url_data.group(2).split("/")[0]
+ self.twist_url_base = url_data.group(1)
+
+ self.head = {"x-access-token": "1rj2vRtegS8Y60B3w3qNZm5T2Q0TN2NR"}
+
+ def __get_source_data(self):
+ sources_url = "{base_url}/api/anime/{anime_name}/sources".format(base_url=self.twist_url_base, anime_name=self.anime_name)
+ return self.session.get(sources_url, headers=self.head).json()
+
+ def __extract_download_urls(self):
+ episodes = []
+ epi_data = self.__get_source_data()
+ for epi in epi_data:
+ epi_no = epi["number"]
+
+ if epi_no < self.start_episode or epi_no > self.end_episode:
+ continue
+
+ episode = Episode("Episode - " + str(epi_no), "Episode - " + str(epi_no))
+ url = "https://twistcdn.bunny.sh" + TwistSourceDecryptor(epi["source"]).decrypt()
+
+ episode.download_url = self.session.get(url, headers={"referer": self.twist_url_base}, allow_redirects=False).headers["location"]
+
+ episode.request_headers = {"referer": "{base}/a/{name}/{id}".format(base=self.twist_url_base, name=self.anime_name, id=str(epi_no))}
+
+ print(episode.download_url)
+
+ episodes.append(episode)
+
+ return episodes
+
+ def get_direct_links(self):
+ try:
+ episodes = self.__extract_download_urls()
+ if len(episodes) > 0:
+ return episodes
+ else:
+ return None
+
+ except Exception as ex:
+ printer("ERROR", str(ex), self.gui)
+ return None
+
+
+# if __name__ == "__main__":
+# import cloudscraper as cs
+#
+# for e in TwistScraper("https://twist.moe/a/one-piece/1", 900, 927, cs.create_scraper()).get_direct_links():
+# print(e.title, ":", e.download_url)
diff --git a/anime_downloader/scrapers/twist/twist_source_decryptor.py b/anime_downloader/scrapers/twist/twist_source_decryptor.py
new file mode 100644
index 0000000..b5067d2
--- /dev/null
+++ b/anime_downloader/scrapers/twist/twist_source_decryptor.py
@@ -0,0 +1,55 @@
+from requests.utils import requote_uri
+from base64 import b64decode
+from hashlib import md5
+from Crypto.Cipher import AES
+
+
+class TwistSourceDecryptor:
+ BLOCK_SIZE = 16
+ SECRET_KEY = b'LXgIVP&PorO68Rq7dTx8N^lP!Fa5sGJ^*XK'
+
+ def __init__(self, enc_src):
+ self.enc_src = enc_src.encode('utf-8')
+
+ def __pad(self, data):
+ length = self.BLOCK_SIZE - (len(data) % self.BLOCK_SIZE)
+ return data + (chr(length) * length).encode()
+
+ def __unpad(self, data):
+ # print(data[-1])
+ return data[:-(data[-1] if type(data[-1]) == int else ord(data[-1]))]
+
+ def __get_key_iv(self, data, salt, output=48):
+ assert len(salt) == 8, len(salt)
+ data += salt
+ key = md5(data).digest()
+ key_iv_data = key
+ while len(key_iv_data) < output:
+ key = md5(key + data).digest()
+ key_iv_data += key
+
+ return key_iv_data[:output]
+
+ def decrypt(self):
+ enc_data = b64decode(self.enc_src)
+ # print("b64decode enc :", enc_data)
+ assert enc_data[:8] == b'Salted__'
+
+ salt = enc_data[8:16] # 8byte salt
+ key_iv = self.__get_key_iv(self.SECRET_KEY, salt) # key+iv is 48bytes
+ key = key_iv[:32] # key is 32byte
+ iv = key_iv[32:] # 16byte iv
+ # print("key :", key)
+ # print("iv :", iv)
+
+ aes = AES.new(key, AES.MODE_CBC, iv)
+
+ decrypt_data = aes.decrypt(enc_data[16:]) # actual data are after first 16bytes (which is salt)
+ decrypt_data = self.__unpad(decrypt_data).decode('utf-8').lstrip(' ')
+ # print(decrypt_data)
+ return requote_uri(decrypt_data) # parse to url safe value
+
+# if __name__ == "__main__":
+# enc = "U2FsdGVkX19HQClvPEOzwC/GB0VRwqWykgOTB+xGwpi7Tu6uTdSUbBsiKOJ5KH0udjYE/10xinA7Km/nGm88txhTYb/oqSksAaBBV8xM0XQ="
+# dec = TwistSourceDecryptor(enc).decrypt()
+# print(dec)
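The `__get_key_iv` routine above reimplements OpenSSL's legacy `EVP_BytesToKey` derivation (MD5 variant), which is what produces the `Salted__`-prefixed blobs this decryptor consumes. A standalone sketch of the same derivation, using only the standard library (the passphrase and salt here are made up for illustration):

```python
from hashlib import md5

def evp_bytes_to_key(password: bytes, salt: bytes, output: int = 48) -> bytes:
    """OpenSSL's legacy EVP_BytesToKey with MD5: keep hashing
    (previous digest + password + salt) until `output` bytes exist."""
    data = password + salt
    digest = md5(data).digest()
    derived = digest
    while len(derived) < output:
        digest = md5(digest + data).digest()
        derived += digest
    return derived[:output]

# hypothetical passphrase and 8-byte salt, for illustration only
derived = evp_bytes_to_key(b"example-pass", b"8byteslt")
key, iv = derived[:32], derived[32:]  # 32-byte AES-256 key + 16-byte IV
```

The first 16 derived bytes are simply `md5(password + salt)`, which is why this scheme is considered weak and survives only for compatibility with OpenSSL's `enc` default.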
diff --git a/anime_downloader/util/Episode.py b/anime_downloader/util/Episode.py
index ce44fd1..dc77b9c 100644
--- a/anime_downloader/util/Episode.py
+++ b/anime_downloader/util/Episode.py
@@ -11,6 +11,7 @@ def __init__(self, title, episode):
self.page_url = None
self.download_url = None
self.is_direct = True
+ self.request_headers = {}
def extract_episode_names(url, is_filler, start_epi, end_epi, gui=None):
diff --git a/anime_downloader/util/hls_downloader.py b/anime_downloader/util/hls_downloader.py
new file mode 100644
index 0000000..73dd8b0
--- /dev/null
+++ b/anime_downloader/util/hls_downloader.py
@@ -0,0 +1,99 @@
+import re
+from util.Color import printer
+from Crypto.Cipher import AES
+
+
+class HLSDownloader:
+ def __init__(self, episode, directory, session, gui=None):
+ self.episode = episode
+ self.directory = directory
+ self.session = session
+ self.gui = gui
+ self.count = 0
+
+ def __get_default_iv(self):
+ """When the EXT-X-KEY tag carries no IV attribute, each segment's IV is its media sequence number as a 16-byte big-endian integer; this implementation counts from 1"""
+ self.count += 1
+ return self.count.to_bytes(16, 'big')
+
+ def __decrypt(self, data, key, iv=None):
+ if iv is None:
+ iv = self.__get_default_iv() # get the default iv value for the segment
+
+ # print(iv)
+ decryptor = AES.new(key, AES.MODE_CBC, iv)
+ return decryptor.decrypt(data)
+
+ def __collect_stream_data(self, ts_url):
+ return self.session.get(ts_url).content
+
+ def __is_encrypted(self, m3u8_data):
+ # METHOD may appear with or without trailing attributes, so don't require a comma
+ method = re.search(r'#EXT-X-KEY:METHOD=([^,\s]+)', m3u8_data)
+ if method is None or method.group(1) == "NONE":
+ return False
+
+ return True
+
+ def __collect_uri_iv(self, m3u8_data):
+ uri_iv = re.search(r'#EXT-X-KEY:METHOD=AES-128,URI="(.*)",IV=(.*)', m3u8_data)
+
+ if uri_iv is None:
+ uri_data = re.search(r'#EXT-X-KEY:METHOD=AES-128,URI="(.*)"', m3u8_data)
+ return uri_data.group(1), None
+
+ uri = uri_iv.group(1)
+ iv = uri_iv.group(2).strip()
+ # the IV attribute is a hex string (e.g. 0x00...01); AES needs raw bytes
+ iv = bytes.fromhex(iv[2:] if iv.lower().startswith("0x") else iv)
+
+ return uri, iv
+
+ def __collect_ts_urls(self, m3u8_data):
+ urls = [url.group(0) for url in re.finditer(r"https://(.*)\.ts(.*)", m3u8_data)]
+ if len(urls) == 0:
+ # playlist uses relative segment paths, so resolve them against the m3u8 url
+ base_url = re.search(r"(.*)/\S+\.m3u8", self.episode.download_url).group(1)
+ urls = [base_url + "/" + url.group(0) for url in re.finditer(r"(.*)\.ts(.*)", m3u8_data)]
+
+ return urls
+
+ def download(self):
+ key = None
+ iv = None
+
+ print(self.episode.download_url)
+ m3u8_data = self.session.get(self.episode.download_url).text
+
+ is_encrypted = self.__is_encrypted(m3u8_data)
+ if is_encrypted:
+ key_uri, iv = self.__collect_uri_iv(m3u8_data)
+ # print("uri, iv :", key_uri, iv)
+ key = self.__collect_stream_data(key_uri)
+ # print("key :", key)
+
+ ts_urls = self.__collect_ts_urls(m3u8_data)
+ # print("ts_urls :", ts_urls)
+
+ with open(self.directory + self.episode.title + ".mp4", "wb") as epi_file:
+ for ts_url in ts_urls:
+ print("Processing ts file :", ts_url)
+ ts_data = self.__collect_stream_data(ts_url)
+ # print("ts data:", ts_data)
+ if is_encrypted:
+ # print("encrypted")
+ ts_data = self.__decrypt(ts_data, key, iv)
+ # print("decrypted")
+ # print("writing")
+ epi_file.write(ts_data)
+
+# if __name__ == "__main__":
+# import cloudscraper as cs
+# from util.Episode import Episode
+# session = cs.create_scraper()
+#
+# epi = Episode("Test", "Test")
+# epi.download_url = "https://v.vrv.co/evs/cf73f5561410a8a7e412491991f0d508/assets/9c7l0i4fq5tnq7u_1055213.mp4/index-v1-a1.m3u8?Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly92LnZydi5jby9ldnMvY2Y3M2Y1NTYxNDEwYThhN2U0MTI0OTE5OTFmMGQ1MDgvYXNzZXRzLzljN2wwaTRmcTV0bnE3dV8qIiwiQ29uZGl0aW9uIjp7IkRhdGVMZXNzVGhhbiI6eyJBV1M6RXBvY2hUaW1lIjoxNTg2NjMyOTcyfX19XX0_&Signature=J6o4tQBJKxqNc95v5dhkT7mFEVPQwDZLMBBNsqf2JGcDPCyWvWHunR6nsPSgeWT5nVGuPx2o2fh3OrAJERtQuRMJDpUSCZzOX0Jyajvwb6ot9SAuusXKg4nO7em~CiF2MqQURSChhLHPhiEEmUPD0wlB5WzVWnPg9XxLIX0RMgIXARtDhRrlvL0K-oXEJIv2IWyKe9MoTy99lj1vZmeNy4WKC~opWfXImvRmRReGQyi1Kvr6Wl6fll6oPNFnqq~2CBGPNB5TFQlDr4TnZPHcJasoN3m9OMJuga9SAXi0Td7-klw78dge4z2leQC88QA9aFFGDHqnevDecjAyIQEPZA__&Key-Pair-Id=APKAJMWSQ5S7ZB3MF5VA"
+#
+# HLSDownloader(epi, "", session).download()
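The IV handling in `HLSDownloader` follows the HLS specification (RFC 8216): when the `EXT-X-KEY` tag carries an `IV` attribute it is a hex string that must be converted to raw bytes, and when it is absent the segment's media sequence number serves as a 128-bit big-endian IV. A minimal helper sketching both cases (the name and shape are illustrative, not part of the project API):

```python
def hls_iv(iv_attr, media_sequence):
    """Resolve the 16-byte AES-CBC IV for an HLS segment.

    Per RFC 8216: an explicit IV attribute is a hex string such as
    0x00000000000000000000000000000001; with no IV attribute, the
    segment's media sequence number is used as a big-endian 128-bit IV.
    """
    if iv_attr is None:
        return media_sequence.to_bytes(16, "big")
    hex_str = iv_attr[2:] if iv_attr.lower().startswith("0x") else iv_attr
    return bytes.fromhex(hex_str)
```

Either result can be passed straight to `AES.new(key, AES.MODE_CBC, iv)`.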
diff --git a/anime_downloader/util/js_unpacker.py b/anime_downloader/util/js_unpacker.py
index 804ea96..9b97528 100644
--- a/anime_downloader/util/js_unpacker.py
+++ b/anime_downloader/util/js_unpacker.py
@@ -35,3 +35,10 @@ def extract_link(self, func):
# print(src)
return src
+
+# if __name__ == "__main__":
+# fun = '''
+# eval(function(p,a,c,k,e,d){e=function(c){return(c35?String.fromCharCode(c+29):c.toString(36))};if(!''.replace(/^/,String)){while(c--){d[e(c)]=k[c]||e(c)}k=[function(e){return d[e]}];e=function(){return'\\w+'};c=1};while(c--){if(k[c]){p=p.replace(new RegExp('\\b'+e(c)+'\\b','g'),k[c])}}return p}('3 9="5://1d.r.2/i.o.2/f/8/e.j?u=y%v.F.z.2&A=B&C=D%x%E%G%H%I%J%K%L%M%w%k%l%m%a%a";3 c="5://p-q-h.i.2/s/f/8/e.j/O.15";3 4=[{b:9},{b:c}];!6 t(){17{!6 t(n){1===(""+n/n).N&&n%19!=0||6(){}.1a("1b")(),t(++n)}(0)}1c(n){1e(t,1f)}}();3 7=1g(\'7\');7.1h({1i:\'16 14 (8)Z 1\',Y:\'d%\',X:\'d%\',W:\'5://V.U.T/h-S/R.Q\',18:{},4:4,});',62,81,'||com|var|sources|https|function|player|dub|fone|3D|file|ftwo|100|1_1807|116|span|vod|auengine|mp4|2Fo|2B1m9o4E21cFRaGmn6sqip5a0cGoab1lPjNlUB7s07TRdZ|2BVFXLfoXNCRLQ||appspot|s1|na|googleapis|hls2||GoogleAccessId|40auengine|2Bdb6DVVQ7nTjJ9jgqnmJmEHSdEmv6019e0YBwh|2BsspInuB|auevod|gserviceaccount|Expires|1586640060|Signature|SO9JeiA|2FZljXmaCZFHiwn1miyMU|iam|2FYBkDGoICGeUlaWqpe20SqAHBnillgfl03rc|2FxeI3MoCQn4Eps3gaxHdgp7GjXPIRWxHVAr1uMzuX5RiNvLGarTrndZ|2BxL9B0kpK2rgHf5BEQoQgwuAOhwltn30ikKP0nNYMQj64mErafXWkmvP90t8Qhom|2BUOhgjXpJcZqW9fOly|2BDqttGgU96b|2F0Z3u0m1rYGvtpYJ8M9QcBZM|2F|length|playlist|br|jpg|vs3b1Z2XacICszjz|thumbnails|tv|animeultima|cdn|image|height|width|Episode|12px|size|font|style|Piece|m3u8|One|try|cast|20|constructor|debugger|catch|storage|setTimeout|1e3|jwplayer|setup|title'.split('|'),0,{}))
+# '''
+#
+# print(JsUnpacker().extract_link(fun))
\ No newline at end of file
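For context, `js_unpacker` targets Dean Edwards' `eval(function(p,a,c,k,e,d)...)` packer, like the payload in the commented example above: each base-N token in the packed source is an index into the trailing `'...'.split('|')` dictionary. A deliberately simplified decoder for radices up to 36 (the sample payload actually uses base 62, which needs a custom digit alphabet; this sketch is not the module's real implementation):

```python
import re

def unpack(payload, radix, words):
    """Simplified P.A.C.K.E.R. decoder for radix <= 36: each
    base-`radix` token indexes into `words`; tokens with no (or an
    empty) dictionary entry are kept as-is."""
    def sub(match):
        token = match.group(0)
        try:
            word = words[int(token, radix)]
        except (ValueError, IndexError):
            return token
        return word if word else token
    return re.sub(r"\b\w+\b", sub, payload)
```

For example, `unpack("0 1=2;", 10, ["var", "x", "42"])` yields `"var x=42;"`.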
diff --git a/docs/images/banner.png b/docs/images/banner.png
new file mode 100644
index 0000000..933e4a4
Binary files /dev/null and b/docs/images/banner.png differ
diff --git a/docs/images/gui.png b/docs/images/gui.png
index 02419a1..0053ca5 100644
Binary files a/docs/images/gui.png and b/docs/images/gui.png differ
diff --git a/requirements.txt b/requirements.txt
index 571700c..444e450 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,5 +1,6 @@
+pycryptodome==3.9.7
jsbeautifier
-requests==2.22.0
+requests==2.23.0
art==4.5
cloudscraper==1.2.33
beautifulsoup4==4.8.2
diff --git a/setup.py b/setup.py
new file mode 100644
index 0000000..e69de29