Columns and DataType Not Explicitly Set on line 219 of utils.py #182

CodeSmileBot opened this issue Nov 21, 2023 · 0 comments

Hello!

I found an AI-specific code smell in your project.
The smell is called: Columns and DataType Not Explicitly Set

You can find more information about it in this paper: https://dl.acm.org/doi/abs/10.1145/3522664.3528620.

According to the paper, the smell is described as follows:

**Problem:** If the columns are not selected explicitly, it is not easy for developers to know what to expect in the downstream data schema. If the datatype is not set explicitly, processing may silently continue to the next step even when the input is unexpected, which can cause errors later. The same applies to other data-importing scenarios.
**Solution:** It is recommended to set the columns and DataType explicitly in data processing.
**Impact:** Readability

Example:

### Pandas Column Selection
import pandas as pd
df = pd.read_csv('data.csv')
+ df = df[['col1', 'col2', 'col3']]

### Pandas Set DataType
import pandas as pd
- df = pd.read_csv('data.csv')
+ df = pd.read_csv('data.csv', dtype={'col1': 'str', 'col2': 'int', 'col3': 'float'})
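For illustration, the short sketch below (purely hypothetical: it assumes a local `data.csv` with columns `col1`-`col3`) shows how inspecting `df.dtypes` exposes the difference between an inferred schema and an explicitly declared one. Note that an integer column that can contain missing values needs a nullable type such as `'Int64'`, or a float dtype.

```python
import pandas as pd

# Hypothetical file; column names and dtypes here are illustrative only.
df_inferred = pd.read_csv('data.csv')
print(df_inferred.dtypes)  # pandas guesses the types (object/int64/float64/...)

# Selecting columns and declaring dtypes makes the downstream schema explicit.
df_explicit = pd.read_csv(
    'data.csv',
    usecols=['col1', 'col2', 'col3'],
    dtype={'col1': 'str', 'col2': 'Int64', 'col3': 'float'},  # 'Int64' tolerates NaN
)
print(df_explicit.dtypes)
```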

You can find the code related to this smell at the following location (utils.py, around line 219):

training_file_path, eval_file_path = download(DATA_DIR)
# This census data uses the value '?' for missing entries. We use na_values to
# find ? and set it to NaN.
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html
train_df = pd.read_csv(training_file_path, names=_CSV_COLUMNS, na_values='?')
eval_df = pd.read_csv(eval_file_path, names=_CSV_COLUMNS, na_values='?')
train_df = preprocess(train_df)
eval_df = preprocess(eval_df)
# Split train and eval data with labels. The pop method copies and removes
# the label column from the dataframe.
train_x, train_y = train_df, train_df.pop(_LABEL_COLUMN)
eval_x, eval_y = eval_df, eval_df.pop(_LABEL_COLUMN)
# Join train_x and eval_x to normalize on overall means and standard ...
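A possible way to remove the smell at this location, following the recommendation above, would be to pass an explicit dtype mapping to pd.read_csv. The sketch below is only illustrative: the column names, types, and file path are placeholders, and a real fix would have to match the actual _CSV_COLUMNS and census schema used in utils.py. Numeric columns that can become NaN after na_values='?' need a float or nullable 'Int64' dtype.

```python
import pandas as pd

# Placeholders: the real column list and dtype mapping must match the
# _CSV_COLUMNS definition and the census schema used in utils.py.
_CSV_COLUMNS = ['age', 'workclass', 'hours_per_week', 'income_bracket']
_CSV_COLUMN_TYPES = {
    'age': 'Int64',             # nullable integer, tolerates NaN from na_values='?'
    'workclass': 'str',
    'hours_per_week': 'Int64',
    'income_bracket': 'str',
}
training_file_path = 'adult.data.csv'  # placeholder path

# With names + dtype set explicitly, the schema is documented in the code and
# read_csv fails fast if a column cannot be parsed as the declared type.
train_df = pd.read_csv(
    training_file_path,
    names=_CSV_COLUMNS,
    dtype=_CSV_COLUMN_TYPES,
    na_values='?',
)
print(train_df.dtypes)
```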

I also found instances of this smell in other files, such as:

File: https://github.com/GoogleCloudPlatform/ml-on-gcp/blob/master/example_zoo/tensorflow/models/ncf_main/official/datasets/movielens.py#L243-L253 Line: 248
File: https://github.com/GoogleCloudPlatform/ml-on-gcp/blob/master/example_zoo/tensorflow/models/ncf_main/official/datasets/movielens.py#L250-L260 Line: 255
File: https://github.com/GoogleCloudPlatform/ml-on-gcp/blob/master/example_zoo/tensorflow/models/ncf_main/official/recommendation/data_preprocessing.py#L99-L109 Line: 104
File: https://github.com/GoogleCloudPlatform/ml-on-gcp/blob/master/gce/burst-training/census-analysis.py#L58-L68 Line: 63
File: https://github.com/GoogleCloudPlatform/ml-on-gcp/blob/master/gce/burst-training/census-analysis.py#L63-L73 Line: 68

I hope this information is helpful!
