chore(deps): remove the pandas extra (#10132)
cpcloud committed Sep 15, 2024
1 parent 8a260a8 commit 9c8aea1
Showing 9 changed files with 13 additions and 29 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -166,7 +166,7 @@ Ibis broadly supports two types of backend:
1. SQL-generating backends
2. DataFrame-generating backends

-![Ibis backend types](https://raw.githubusercontent.com/ibis-project/ibis/main/docs/images/backends.png)
+![Ibis backend types](./docs/images/backends.png)

## Portability


Large diffs are not rendered by default.

6 changes: 3 additions & 3 deletions docs/concepts/internals.qmd
@@ -18,9 +18,9 @@ The internals are designed to map the Ibis API to the backend.
1. Backend specific rewrites
1. Expressions are compiled
1. The SQL string that generated by the compiler is sent to the database and
-executed (this step is skipped for the pandas backend)
-1. The database returns some data that is then turned into a pandas DataFrame
-by Ibis
+executed (this step is skipped for the polars backend)
+1. The database returns some data that is then turned into an in-memory format
+such as a pandas DataFrame

## Expressions

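The compile-then-execute flow the updated list describes can be sketched with the standard library; `run_query` is a hypothetical helper, and sqlite stands in for the real backend database (this illustrates the pipeline's shape, not ibis's actual internals):

```python
import sqlite3

def run_query(sql: str) -> list[dict]:
    # Steps 3-5 above in miniature: a compiled SQL string is sent to
    # the database, executed, and the returned rows are turned into an
    # in-memory format (here a list of dicts instead of a DataFrame).
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (x INTEGER)")
    con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])
    cur = con.execute(sql)
    columns = [c[0] for c in cur.description]
    return [dict(zip(columns, row)) for row in cur.fetchall()]

print(run_query("SELECT x, x * 2 AS y FROM t"))
# [{'x': 1, 'y': 2}, {'x': 2, 'y': 4}, {'x': 3, 'y': 6}]
```

In ibis itself the "in-memory format" step is pluggable, which is why the doc now says "such as a pandas DataFrame" rather than naming pandas as the only output.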
Binary file modified docs/images/backends.png
2 changes: 1 addition & 1 deletion docs/posts/ffill-and-bfill-using-ibis/index.qmd
@@ -30,7 +30,7 @@ This was heavily inspired by Gil Forsyth's writeup on ffill and bfill on the
### Setup

First, we want to make some mock data.
-To demonstrate this technique in a non-pandas backend, we will use the DuckDB backend.
+To demonstrate this technique we will use the DuckDB backend.

Our data will have measurements by date, and these measurements will be grouped by an event id.
We will then save this data to `data.parquet` so we can register that parquet file as a table in our DuckDB connector.
1 change: 0 additions & 1 deletion ibis/backends/tests/test_temporal.py
@@ -1323,7 +1323,6 @@ def test_day_of_week_column_group_by(
.rename(columns={"timestamp_col": "day_of_week_result"})
)

-# FIXME(#1536): Pandas backend should use query.schema().apply_to
backend.assert_frame_equal(result, expected, check_dtype=False)


2 changes: 1 addition & 1 deletion ibis/formats/pandas.py
@@ -91,7 +91,7 @@ def infer_table(cls, df):
for column_name in df.dtypes.keys():
if not isinstance(column_name, str):
raise TypeError(
"Column names must be strings to use the pandas backend"
"Column names must be strings to ingest a pandas DataFrame"
)

pandas_column = df[column_name]
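The reworded error message above sits inside a guard that rejects non-string column names before inference proceeds. A minimal stand-alone sketch of that guard, using a hypothetical `validate_column_names` helper that takes a plain iterable in place of `df.dtypes.keys()`:

```python
def validate_column_names(columns):
    # Mirror the guard in infer_table: every column name must be a str
    # before the DataFrame can be ingested.
    for column_name in columns:
        if not isinstance(column_name, str):
            raise TypeError(
                "Column names must be strings to ingest a pandas DataFrame"
            )

validate_column_names(["a", "b"])  # passes silently
```

Integer column names (common after a pandas `transpose` or `reset_index`) are exactly the kind of input this check rejects.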
3 changes: 1 addition & 2 deletions poetry.lock

Some generated files are not rendered by default.

17 changes: 1 addition & 16 deletions pyproject.toml
@@ -194,15 +194,6 @@ oracle = [
"pandas",
"rich",
]
-pandas = [
-"regex",
-"packaging",
-"pyarrow",
-"pyarrow-hotfix",
-"numpy",
-"pandas",
-"rich",
-]
polars = [
"polars",
"packaging",
@@ -313,18 +304,14 @@ filterwarnings = [
-# pandas 1.5.x
-"ignore:iteritems is deprecated and will be removed in a future version:FutureWarning",
-'ignore:Passing unit-less datetime64 dtype to \.astype is deprecated:FutureWarning',
-'ignore:The default value of numeric_only in DataFrameGroupBy\.sum is deprecated:FutureWarning',
# numpy
"ignore:Creating an ndarray from ragged nested sequences:",
'ignore:`np\.bool` is a deprecated alias for the builtin `bool`:DeprecationWarning',
# numpy, coming from a pandas call
'ignore:In the future `np\.bool` will be defined as the corresponding NumPy scalar:FutureWarning',
# pandas by way of polars when comparing arrays
'ignore:The truth value of an empty array is ambiguous\.:DeprecationWarning',
# druid
'ignore:Dialect druid.rest will not make use of SQL compilation caching:',
# ibis
'ignore:`(Base)?Backend.database` is deprecated:FutureWarning',
'ignore:`StructValue\.destructure` is deprecated as of v10\.0; use lift or unpack instead:FutureWarning',
# spark
"ignore:distutils Version classes are deprecated:DeprecationWarning",
@@ -347,7 +334,7 @@ filterwarnings = [
"ignore:the imp module is deprecated in favour of importlib:DeprecationWarning",
# pytest raises a syntax error when encountering this from *any* module, including third party modules
"ignore:invalid escape sequence:DeprecationWarning",
-# geopandas raises usr warning on geometry column
+# geopandas raises user warning on geometry column
"ignore:Geometry is in a geographic CRS",
# `is_sparse` deprecation was addressed in pyarrow 13.0.0 (see https://github.com/apache/arrow/pull/35366),
# but flink requires apache-beam<2.49, which caps its pyarrow dependency (see https://github.com/apache/beam/blob/v2.48.0/sdks/python/setup.py#L144)
@@ -366,8 +353,6 @@ filterwarnings = [
"ignore:Passing a BlockManager to DataFrame is deprecated:DeprecationWarning",
# snowpark logging warnings
"ignore:The 'warn' method is deprecated, use 'warning' instead:DeprecationWarning",
-# dask and pandas backend deprecation
-'ignore:The (dask|pandas) backend is slated for removal in 10\.0:DeprecationWarning',
]
empty_parameter_set_mark = "fail_at_collect"
markers = [
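The `filterwarnings` entries trimmed above use pytest's `action:message:category` syntax, which maps directly onto Python's own warnings filters. A stdlib sketch of the same suppression, with a hypothetical `is_suppressed` helper:

```python
import warnings

def is_suppressed(message: str, category: type) -> bool:
    # Returns True when the ignore filter below swallows the warning.
    with warnings.catch_warnings(record=True) as caught:
        # record=True installs an "always" filter; the ignore filter
        # added here is consulted first, just as a pyproject.toml
        # filterwarnings entry would be under pytest.
        warnings.filterwarnings(
            "ignore",
            message=r"iteritems is deprecated",
            category=FutureWarning,
        )
        warnings.warn(message, category)
        return len(caught) == 0

print(is_suppressed(
    "iteritems is deprecated and will be removed in a future version",
    FutureWarning,
))  # True
```

The `message` field is a regular expression matched against the start of the warning text, which is why the removed entries escape literal dots (`10\.0`).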
