Conciseness
Conciseness measures how efficiently a summary conveys information without unnecessary words or phrases: a concise summary communicates the key points clearly and briefly while retaining the essential content of the source.
Overview
The conciseness metric uses an LLM provider to analyze how well a summary balances information completeness with brevity. It helps identify whether a summary is appropriately succinct while retaining crucial content from the source text.
Usage
Here's how to evaluate conciseness using Assert LLM Tools:
from assert_llm_tools.core import evaluate_summary
from assert_llm_tools.llm.config import LLMConfig

metrics = ["conciseness"]

# Configure LLM provider (choose one)
llm_config = LLMConfig(
    provider="bedrock",
    model_id="anthropic.claude-v2",
    region="us-east-1"
)

# Alternatively, use OpenAI:
# llm_config = LLMConfig(
#     provider="openai",
#     model_id="gpt-4o-mini",
#     api_key="your-api-key"
# )

# Example texts
full_text = "The new energy policy aims to reduce carbon emissions by 30% by 2030 through implementing renewable energy solutions."
summary = "Policy targets 30% carbon reduction by 2030 via renewables."

# Evaluate conciseness
results = evaluate_summary(
    full_text,
    summary,
    metrics=metrics,
    llm_config=llm_config
)

# Print results
print("\nEvaluation Metrics:")
for metric, score in results.items():
    print(f"{metric}: {score:.4f}")
Interpretation
The conciseness score ranges from 0 to 1:
- 1.0: Optimal conciseness (efficient information delivery)
- 0.0: Poor conciseness (verbose or inefficient communication)
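In practice, scores between the two extremes are often mapped to a coarse pass/fail or tiered check before acting on them. A minimal sketch of such a mapping (the 0.8 and 0.5 cut-offs are illustrative defaults, not part of Assert LLM Tools):

```python
def interpret_conciseness(score: float) -> str:
    """Map a 0-1 conciseness score to a coarse quality label.

    The 0.8 / 0.5 thresholds are illustrative, not part of the
    library -- tune them for your own content and LLM provider.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("conciseness score must be in [0, 1]")
    if score >= 0.8:
        return "concise"
    if score >= 0.5:
        return "acceptable"
    return "verbose"


# Example: label the scores returned by evaluate_summary
print(interpret_conciseness(0.92))  # concise
print(interpret_conciseness(0.35))  # verbose
```

Because the score is model-dependent (see Limitations), thresholds like these should be calibrated against a handful of summaries you have judged by hand.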
When to Use
Use conciseness metrics when:
- Optimizing summary length
- Evaluating text compression quality
- Training summarization models
- Generating executive summaries or abstracts
- Creating content for space-constrained formats
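When optimizing summary length or evaluating compression, a cheap word-count compression ratio can serve as a pre-check alongside the LLM-based score. The helper below is a sketch and is not part of Assert LLM Tools:

```python
def compression_ratio(full_text: str, summary: str) -> float:
    """Return the word-count ratio of summary to source (lower = shorter).

    Purely length-based: it says nothing about whether key information
    survived, so pair it with the LLM-based conciseness metric.
    """
    source_words = len(full_text.split())
    if source_words == 0:
        raise ValueError("full_text must not be empty")
    return len(summary.split()) / source_words


full_text = ("The new energy policy aims to reduce carbon emissions by 30% "
             "by 2030 through implementing renewable energy solutions.")
summary = "Policy targets 30% carbon reduction by 2030 via renewables."
print(f"{compression_ratio(full_text, summary):.2f}")  # 0.50
```

A very low ratio with a high conciseness score suggests an efficient summary; a very low ratio with a low score suggests information was dropped rather than compressed.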
Limitations
- Requires an LLM provider, which may incur costs
- May not account for domain-specific verbosity requirements
- The balance between detail and brevity is partly subjective
- Results may vary depending on the LLM model used