Removed: obsolete parts.
im-n1 committed Oct 12, 2021
1 parent fdbff08 commit 3481e63
Showing 7 changed files with 598 additions and 574 deletions.
19 changes: 4 additions & 15 deletions README.rst
@@ -36,7 +36,6 @@ need to install dependencies you don't need. Therefore this library utilizes
extras which install optional dependencies:

* for Google trends - google
-* for Twitter scraping - twitter

Usage
-----
@@ -45,9 +44,7 @@ Usage
.. code-block:: bash

    pip install karpet # Basics only
-    pip install karpet[twitter] # For Twitter scraping
    pip install karpet[google] # For Google trends
-    pip install karpet[twitter,google] # All features
2. Import the library class first.

Expand Down Expand Up @@ -83,18 +80,6 @@ Retrieves exchange list.
k.fetch_crypto_exchanges("nrg")
['DigiFinex', 'KuCoin', 'CryptoBridge', 'Bitbns', 'CoinExchange']
-.. fetch_tweets()
-.. ~~~~~~~~~~~~~~
-.. Retrieves twitter tweets.
-.. .. code-block:: python
-..     k = Karpet(date(2019, 1, 1), date(2019, 5, 1))
-..     df = k.fetch_tweets(kw_list=["bitcoin"], lang="en") # Dataframe with tweets.
-..     df.head()
-.. .. image:: https://raw.githubusercontent.com/im-n1/karpet/master/assets/tweets.png
fetch_google_trends()
~~~~~~~~~~~~~~~~~~~~~
Retrieves Google Trends - in percent for the given date range.
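
A minimal usage sketch, assuming the method follows the same call pattern as the removed ``fetch_tweets()`` example above (the ``kw_list`` parameter name and the import path are assumptions, not shown in this diff):

.. code-block:: python

    from datetime import date

    from karpet import Karpet  # Import path assumed.

    k = Karpet(date(2019, 1, 1), date(2019, 5, 1))
    df = k.fetch_google_trends(kw_list=["bitcoin"])  # Dataframe with trend values in percent.
    df.head()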
@@ -216,6 +201,10 @@ available.
Changelog
---------

+0.4.4
+~~~~~
+- remove obsolete parts of the code and some dependencies
+
0.4.3
~~~~~
- fixed ``get_basic_data()`` method (different data source)
78 changes: 0 additions & 78 deletions karpet/core.py
@@ -1,6 +1,5 @@
try:
    from pytrends.request import TrendReq
-    from twitterscraper import query_tweets
except:
    pass
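
The surviving ``try``/``except`` block is the optional-dependency pattern the extras rely on: a missing extra must not break importing the module, so the failure is swallowed and each feature verifies its own dependency at call time. A sketch of such a guard for the remaining ``google`` extra, modeled on the one the removed ``fetch_tweets()`` used (its presence inside ``fetch_google_trends()`` is an assumption):

.. code-block:: python

    try:
        _ = TrendReq  # NameError here means the "google" extra is not installed.
    except NameError:
        raise Exception("Google extension is not installed - see README file.")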

@@ -273,83 +272,6 @@ def fetch_google_trends(

return df.sort_values("date").reset_index(drop=True)
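
For context, the effect of that final normalization on a toy frame (a sketch; only the ``date`` column name is taken from the code, the keyword column is hypothetical):

.. code-block:: python

    import pandas as pd

    df = pd.DataFrame({"date": ["2019-02-01", "2019-01-01"], "bitcoin": [40, 55]})
    df = df.sort_values("date").reset_index(drop=True)  # Chronological order, fresh 0..n-1 index.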

# def fetch_tweets(self, kw_list, lang, limit=None):
# """
# Scrapes Twitter without any limits and returns dataframe with the
# following structure

# * fullname
# * id
# * likes
# * replies
# * retweets
# * text
# * timestamp
# * url
# * user
# * date
# * has_link

# :param list kw_list: List of keywords to search for. Will be joined with "OR" operator.
# :param str lang: Language of tweets to search in.
# :param int limit: Limit search results. Might get really big and slow so this should help.
# :return: Pandas dataframe with all search results (tweets).
# :rtype: pd.DataFrame
# """

# def process_tweets(tweets):
# """
# Cleans up tweets and returns dataframe with the
# following structure

# * fullname
# * id
# * likes
# * replies
# * retweets
# * text
# * timestamp
# * url
# * user
# * date
# * has_link

# :param list tweets: List of dicts of tweets data.
# :return: Returns dataframe with all the scraped tweets (no index).
# :rtype: pd.DataFrame
# """

# # 1. Clean up.
# data = []

# for t in tweets:
# d = t.__dict__
# del d["html"]
# data.append(d)

# # 2. Create dataframe
# df = pd.DataFrame(data)
# # import pdb

# # pdb.set_trace()
# df["date"] = df["timestamp"].dt.date
# df["has_link"] = df["text"].apply(
# lambda text: "http://" in text or "https://" in text
# )

# return df

# try:
# _ = query_tweets
# except NameError:
# raise Exception("Twitter extension is not installed - see README file.")

# tweets = query_tweets(
# query=" OR ".join(kw_list), begindate=self.start, lang=lang, limit=limit
# )

# return process_tweets(tweets)

def fetch_news(self, symbol, limit=10):
"""
Fetches news of the given symbol. Each news contains
2 changes: 1 addition & 1 deletion karpet/meta.py
@@ -1 +1 @@
__version__ = "0.4.2"
__version__ = "0.4.4"
(4 more changed files not shown.)