Pipeline tutorial, summarization doesn't work

I’m doing an HF tutorial on transformers at Transformers, what can they do? · Hugging Face. It uses a Colab notebook at Google Colab. When I try to run the code for a summarizer, which begins with this:

from transformers import pipeline

summarizer = pipeline("summarization")

I get this error:

 ---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
/tmp/ipykernel_13970/3730791013.py in <cell line: 0>()
      1 from transformers import pipeline
      2 
----> 3 summarizer = pipeline("summarization")
      4 summarizer(
      5     """

/usr/local/lib/python3.12/dist-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, feature_extractor, image_processor, processor, revision, use_fast, token, device, device_map, dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs)
    775             )
    776     else:
--> 777         normalized_task, targeted_task, task_options = check_task(task)
    778         if pipeline_class is None:
    779             pipeline_class = targeted_task["impl"]

/usr/local/lib/python3.12/dist-packages/transformers/pipelines/__init__.py in check_task(task)
    379 
    380     """
--> 381     return PIPELINE_REGISTRY.check_task(task)
    382 
    383 

/usr/local/lib/python3.12/dist-packages/transformers/pipelines/base.py in check_task(self, task)
   1354             raise KeyError(f"Invalid translation task {task}, use 'translation_XX_to_YY' format")
   1355 
-> 1356         raise KeyError(
   1357             f"Unknown task {task}, available tasks are {self.get_supported_tasks() + ['translation_XX_to_YY']}"
   1358         )

KeyError: "Unknown task summarization, available tasks are ['any-to-any', 'audio-classification', 'automatic-speech-recognition', 'depth-estimation', 'document-question-answering', 'feature-extraction', 'fill-mask', 'image-classification', 'image-feature-extraction', 'image-segmentation', 'image-text-to-text', 'image-to-image', 'keypoint-matching', 'mask-generation', 'ner', 'object-detection', 'question-answering', 'sentiment-analysis', 'table-question-answering', 'text-classification', 'text-generation', 'text-to-audio', 'text-to-speech', 'token-classification', 'video-classification', 'visual-question-answering', 'vqa', 'zero-shot-audio-classification', 'zero-shot-classification', 'zero-shot-image-classification', 'zero-shot-object-detection', 'translation_XX_to_YY']"

The pipeline no longer supports “summarization”. Is there a way to get this to work? I’m mainly curious. It appears the pipeline module may have been updated but the Colab notebook was not.

Thanks!


Yeah. It’s a real version incompatibility between Transformers v4 and v5.
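One quick way to confirm which version your Colab runtime actually has (a small stdlib-only sketch; the "summarization" task is available on 4.x releases):

```python
from importlib import metadata

# Print the installed transformers version without importing the library itself.
try:
    print(metadata.version("transformers"))
except metadata.PackageNotFoundError:
    print("transformers is not installed")
```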

If you go with v5, try it without pipeline:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "google-t5/t5-small"   # or your finetuned summarization checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Your long input text here."  # replace with the passage you want summarized

inputs = tokenizer(
    "summarize: " + text,
    return_tensors="pt",
    truncation=True
).input_ids

outputs = model.generate(inputs, max_new_tokens=100, do_sample=False)
summary = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(summary)

Or try another supported pipeline:

from transformers import pipeline

summarizer = pipeline("text-generation", model="Qwen/Qwen3-4B-Instruct-2507")

text = "Your long input text here."  # replace with the passage you want summarized

messages = [
    {
        "role": "user",
        "content": "Summarize the following text in 3 bullet points:\n\n" + text
    }
]

out = summarizer(messages, max_new_tokens=200)
print(out[0]["generated_text"][-1]["content"])

Thank you John, you are helpful as always. I think this tutorial is from 2022, so it makes sense that something is outdated. I won’t be programming an AI myself, but I still need to learn the basics.


This worked in Colab in my own notebook.

# Fixed code from this tutorial: 
# https://huggingface.co/learn/llm-course/chapter1/3

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "google-t5/t5-small"   # or your finetuned summarization checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
text = """
    America has changed dramatically during recent years. Not only has the number of 
    graduates in traditional engineering disciplines such as mechanical, civil, 
    electrical, chemical, and aeronautical engineering declined, but in most of 
    the premier American universities engineering curricula now concentrate on 
    and encourage largely the study of engineering science. As a result, there 
    are declining offerings in engineering subjects dealing with infrastructure, 
    the environment, and related issues, and greater concentration on high 
    technology subjects, largely supporting increasingly complex scientific 
    developments. While the latter is important, it should not be at the expense 
    of more traditional engineering.

    Rapidly developing economies such as China and India, as well as other 
    industrial countries in Europe and Asia, continue to encourage and advance 
    the teaching of engineering. Both China and India, respectively, graduate 
    six and eight times as many traditional engineers as does the United States. 
    Other industrial countries at minimum maintain their output, while America 
    suffers an increasingly serious decline in the number of engineering graduates 
    and a lack of well-educated engineers.
"""
inputs = tokenizer(
    "summarize: " + text,
    return_tensors="pt",
    truncation=True
).input_ids

outputs = model.generate(inputs, max_new_tokens=100, do_sample=False)
summary = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(summary)

The program did produce some warnings and didn’t wrap the output text in “summary”, but that’s fine.
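If you do want the printed summary wrapped, the stdlib textwrap module handles it (a small sketch; `summary` stands in for the string returned by `tokenizer.decode`):

```python
import textwrap

# Stand-in for the decoded summary string from the example above.
summary = "America has changed dramatically during recent years. " * 4

# Wrap the summary to 80-character lines before printing.
print(textwrap.fill(summary.strip(), width=80))
```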
