[ ] I checked the documentation and related resources and couldn't find an answer to my question.
Your Question
I just copied and pasted the quickstart example code. On my end everything works fine except faithfulness, which shows nan.
Is this normal?
Code Examples
import os

from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_correctness

os.environ["OPENAI_API_KEY"] = "your-openai-key"

data_samples = {
    'question': ['When was the first super bowl?', 'Who won the most super bowls?'],
    'answer': ['The first superbowl was held on Jan 15, 1967', 'The most super bowls have been won by The New England Patriots'],
    'contexts': [
        ['The First AFL–NFL World Championship Game was an American football game played on January 15, 1967, at the Los Angeles Memorial Coliseum in Los Angeles,'],
        ['The Green Bay Packers...Green Bay, Wisconsin.', 'The Packers compete...Football Conference'],
    ],
    'ground_truth': ['The first superbowl was held on January 15, 1967', 'The New England Patriots have won the Super Bowl a record six times'],
}

dataset = Dataset.from_dict(data_samples)
score = evaluate(dataset, metrics=[faithfulness, answer_correctness])
score.to_pandas()
The faithfulness metric showing nan in the quickstart example is most likely caused by no statements being generated from the answer: when the number of statements (num_statements) is zero, the score is set to np.nan [1].
To resolve this, ensure the answer is properly segmented into statements. You may need to check the sentence segmentation step to confirm it is correctly breaking the response down into simpler statements.
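The nan fallback described above can be sketched as follows. This is a minimal illustration of the idea, not ragas's actual implementation; the function name and signature are hypothetical:

```python
def faithfulness_score(supported: int, total: int) -> float:
    # Faithfulness is the fraction of statements extracted from the answer
    # that are supported by the retrieved context. If no statements could
    # be extracted at all (total == 0), the ratio is undefined, so the
    # score falls back to NaN -- the behavior reported in this issue.
    if total == 0:
        return float("nan")
    return supported / total

print(faithfulness_score(2, 4))  # 0.5
print(faithfulness_score(0, 0))  # nan
```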
Additional context
I'm using ragas version 0.1.20.
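To see which samples produced nan, the DataFrame returned by score.to_pandas() can be filtered for missing faithfulness values. A minimal sketch with stand-in data (the column names mirror the quickstart output, but the values here are illustrative, not real evaluation results):

```python
import numpy as np
import pandas as pd

# Stand-in for score.to_pandas(); real values come from the evaluation run.
df = pd.DataFrame({
    "question": ["When was the first super bowl?", "Who won the most super bowls?"],
    "faithfulness": [np.nan, 1.0],
    "answer_correctness": [0.9, 0.8],
})

# Rows where faithfulness could not be computed.
nan_rows = df[df["faithfulness"].isna()]
print(nan_rows["question"].tolist())  # ['When was the first super bowl?']
```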