
Generation

temperature

Temperature is applied by dividing the scores by the temperature right before the softmax. A higher temperature flattens the distribution and leads to more diverse output. Note, however, that temperature does not change the relative order of the probabilities, so it has no impact on decoding when do_sample=False (a small numeric demo follows the snippet below).

transformers/generation/logits_process.py
class TemperatureLogitsWarper(LogitsProcessor):
    def __init__(self, temperature: float):
        if not isinstance(temperature, float) or not (temperature > 0):
            except_msg = (
                f"`temperature` (={temperature}) has to be a strictly positive float, otherwise your next token "
                "scores will be invalid."
            )
            if isinstance(temperature, float) and temperature == 0.0:
                except_msg += " If you're looking for greedy decoding strategies, set `do_sample=False`."
            raise ValueError(except_msg)

        self.temperature = temperature

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        scores_processed = scores / self.temperature
        return scores_processed
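
A minimal numeric sketch of the claim above (plain PyTorch, nothing else assumed): dividing the scores by the temperature flattens or sharpens the softmax distribution, but it never changes the ranking, so the argmax picked by greedy decoding stays the same.

import torch

# hypothetical unnormalized scores over a 5-token vocabulary
scores = torch.tensor([2.0, 1.0, 0.5, -1.0, -3.0])

for temperature in (0.5, 1.0, 2.0):
    probs = torch.softmax(scores / temperature, dim=-1)
    # higher temperature -> flatter distribution, but the ranking is unchanged,
    # so torch.argmax (greedy decoding) always selects the same token
    print(temperature, probs.tolist(), torch.argmax(probs).item())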

top_p

Top_p keeps the smallest set of most probable candidates whose cumulative probability exceeds top_p. The rest are masked with a filter value (typically -inf) so that they don’t get picked.
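
A rough sketch of this filtering step; the real logic lives in TopPLogitsWarper in logits_process.py, and the helper name here is made up for illustration.

import torch

def top_p_filter(scores: torch.Tensor, top_p: float, filter_value: float = float("-inf")) -> torch.Tensor:
    sorted_scores, sorted_indices = torch.sort(scores, descending=False, dim=-1)
    cumulative_probs = torch.softmax(sorted_scores, dim=-1).cumsum(dim=-1)
    # mask (in ascending order) every token whose cumulative probability still
    # fits below 1 - top_p, i.e. keep the smallest top set exceeding top_p
    sorted_to_remove = cumulative_probs <= (1 - top_p)
    to_remove = sorted_to_remove.scatter(-1, sorted_indices, sorted_to_remove)
    return scores.masked_fill(to_remove, filter_value)

print(top_p_filter(torch.tensor([[2.0, 1.0, 0.5, -1.0]]), top_p=0.9))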

top_k

Top_k simply keeps the top_k most probable candidates. The rest are masked with a filter value (typically -inf) so that they don’t get picked.
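
Similarly, a sketch of top-k filtering; the real implementation is TopKLogitsWarper, and the helper name is made up.

import torch

def top_k_filter(scores: torch.Tensor, top_k: int, filter_value: float = float("-inf")) -> torch.Tensor:
    top_k = min(top_k, scores.size(-1))
    # mask everything strictly below the k-th largest score
    to_remove = scores < torch.topk(scores, top_k, dim=-1).values[..., -1, None]
    return scores.masked_fill(to_remove, filter_value)

print(top_k_filter(torch.tensor([[2.0, 1.0, 0.5, -1.0]]), top_k=2))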

repetition_penalty

For tokens that have already occurred, the scores are scaled down by the penalty: with penalty > 1, positive scores are divided by the penalty and negative scores are multiplied by it, so repeated tokens become less likely.
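
A sketch of the core logic, mirroring RepetitionPenaltyLogitsProcessor; the helper name is made up for illustration.

import torch

def apply_repetition_penalty(input_ids: torch.LongTensor, scores: torch.FloatTensor, penalty: float) -> torch.FloatTensor:
    # gather the scores of tokens that already appear in each sequence
    score = torch.gather(scores, 1, input_ids)
    # penalty > 1: positive scores shrink, negative scores get more negative,
    # so tokens that already occurred are less likely to be picked again
    score = torch.where(score < 0, score * penalty, score / penalty)
    return scores.scatter(1, input_ids, score)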

Different Decoding Strategies

In greedy search, the model simply selects the token with the highest conditional probability at each step of generation.

(illustration from the huggingface blog)
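
A minimal sketch of this strategy as a manual loop; the "gpt2" checkpoint is only a stand-in, and model.generate(..., do_sample=False) does the same thing through the high-level API.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("The dog", return_tensors="pt").input_ids
with torch.no_grad():
    # greedy loop: append the argmax token at every step
    for _ in range(20):
        logits = model(input_ids).logits
        next_token = torch.argmax(logits[:, -1, :], dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_token], dim=-1)
print(tokenizer.decode(input_ids[0], skip_special_tokens=True))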

In beam search, at each time step the k most probable candidate sequences are kept; the model then generates the next token for each of the k sequences, and at the end the most probable sequence overall is chosen.

(illustration from the huggingface blog)

As illustrated in the graph above, at time step 1 the sequences (“The”, “dog”) and (“The”, “nice”) are kept. At time step 2, the model generates the next token for both sequences, and the new sequence (“The”, “dog”, “has”) is selected because its overall probability is higher: 0.4 * 0.9 = 0.36 > 0.5 * 0.4 = 0.20.
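
The same strategy through the high-level API, as a minimal sketch with "gpt2" again as a stand-in checkpoint:

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The dog", return_tensors="pt")
# beam search: keep num_beams candidate sequences at every step and return the
# one with the highest overall probability
output_ids = model.generate(**inputs, do_sample=False, num_beams=2, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))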

In open-ended generation, a couple of reasons have been brought forward why beam search might not be the best possible option:

  • Beam search can work very well in tasks where the length of the desired generation is more or less predictable as in machine translation or summarization - see Murray et al. (2018) and Yang et al. (2018). But this is not the case for open-ended generation where the desired output length can vary greatly, e.g. dialog and story generation.
  • We have seen that beam search heavily suffers from repetitive generation. This is especially hard to control with n-gram- or other penalties in story generation since finding a good trade-off between inhibiting repetition and repeating cycles of identical n-grams requires a lot of finetuning.
  • As argued in Ari Holtzman et al. (2019), high quality human language does not follow a distribution of high probability next words. In other words, as humans, we want generated text to surprise us and not to be boring/predictable. The authors show this nicely by plotting the probability a model would give to human text vs. what beam search does.

Quantization

Below is an example that loads a model in 4-bit with NF4 quantization and double quantization, using bfloat16 as the compute dtype for faster training:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# model_id points at whichever checkpoint should be quantized
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_nf4 = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=nf4_config)

sampling
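
With do_sample=True, generate draws the next token from the temperature/top_p/top_k-warped distribution instead of taking the argmax. A minimal sketch, with "gpt2" as a stand-in checkpoint:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The dog", return_tensors="pt")
torch.manual_seed(0)  # sampling is stochastic; seed for reproducibility
output_ids = model.generate(
    **inputs,
    do_sample=True,    # sample instead of greedy / beam search
    temperature=0.7,   # sharpen the distribution a bit
    top_p=0.9,         # nucleus sampling
    top_k=50,          # keep only the 50 most probable tokens
    max_new_tokens=20,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))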

stopping criteria

from transformers import AutoTokenizer, AutoModelForCausalLM, \
    GenerationConfig, BitsAndBytesConfig, StoppingCriteria, \
    TextStreamer, pipeline
import torch


class GenerateSqlStoppingCriteria(StoppingCriteria):

    def __call__(self, input_ids, scores, **kwargs):
        # stops when the sequence "```\n" is generated
        # Baichuan2 tokenizer:
        # ``` -> 84
        # \n  -> 5
        return (
            len(input_ids[0]) > 1
            and input_ids[0][-1] == 5
            and input_ids[0][-2] == 84
        )

    # __len__ and __iter__ let this single criteria object be passed where a
    # StoppingCriteriaList is expected
    def __len__(self):
        return 1

    def __iter__(self):
        yield self


model_id = "baichuan-inc/Baichuan2-13B-chat"
tokenizer = AutoTokenizer.from_pretrained(
    model_id,
    use_fast=False,
    trust_remote_code=True,
    revision="v2.0",
)
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=quantization_config,
    trust_remote_code=True,
)
model.generation_config = GenerationConfig.from_pretrained(model_id, revision="v2.0")
streamer = TextStreamer(tokenizer, skip_prompt=True)
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    revision="v2.0",
    do_sample=False,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    stopping_criteria=GenerateSqlStoppingCriteria(),
    streamer=streamer,
)
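
A hypothetical invocation (the prompt below is just an illustration); generation stops as soon as the model closes the SQL code block with ``` followed by a newline:

result = pipe(
    "Write a SQL query that counts users by country.\n```sql\n",  # made-up prompt
    max_new_tokens=256,
)
print(result[0]["generated_text"])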

Speed Up Generation

speculative decoding

In speculative decoding, a small but competent draft model is used to propose the next few tokens. The base model (the big one) then examines the proposed tokens in a single forward pass, and each draft token is accepted or rejected based on the two models' (log) probabilities. The base model takes over generation again from the first rejected token. It can be proven mathematically that the resulting probability distribution is the same as if the base model had been used the whole time, so there is no quality loss while generation is significantly sped up.
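
In transformers this is exposed as assisted generation via the assistant_model argument of generate. A minimal sketch, with the OPT checkpoints below as stand-ins for a large base model and a small draft model that share a tokenizer:

from transformers import AutoModelForCausalLM, AutoTokenizer

# stand-in checkpoints: a big base model and a small draft model with the same vocabulary
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b")
base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b", device_map="auto")
draft_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m", device_map="auto")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(base_model.device)
# the draft model proposes tokens, the base model verifies them; the output
# distribution matches generating with the base model alone
output_ids = base_model.generate(**inputs, assistant_model=draft_model, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))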

Recent approaches include LLMLingua, which speeds up inference by compressing the prompt and KV cache with minimal performance loss, and Medusa, which matches the quality of speculative sampling by attaching and training multiple Medusa heads, eliminating the need for a separate small yet competent draft model. Note that Medusa does require extra training for these heads.