
Abbreviation Tables

Complete reference for M2M Protocol Token compression mappings.

Key Abbreviations

Request Keys

| Original | Abbreviated | Context |
|---|---|---|
| messages | m | Chat messages array |
| content | c | Message content |
| role | r | Message role |
| model | M | Model identifier |
| temperature | T | Sampling temperature |
| max_tokens | x | Maximum output tokens |
| top_p | p | Nucleus sampling |
| stream | s | Enable streaming |
| stop | S | Stop sequences |
| n | n | Number of completions |
| seed | se | Random seed |
| user | u | User identifier |
| frequency_penalty | f | Frequency penalty |
| presence_penalty | P | Presence penalty |
| logit_bias | lb | Token biases |
| logprobs | lp | Log probabilities |
| top_logprobs | tlp | Top log probs count |
| response_format | rf | Response format |
| tools | ts | Tool definitions |
| tool_choice | tc | Tool selection |
| functions | fs | Function definitions |
| function_call | fc | Function call mode |
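The request-key mapping is a plain lookup table, and because every abbreviation is unique (the codes are case-sensitive: s is stream, S is stop), the inverse map used for decompression is unambiguous. A minimal sketch in Python (the dict and function names are illustrative, not part of the protocol):

```python
# Hypothetical helper names; the mapping itself comes from the table above.
REQUEST_KEYS = {
    "messages": "m", "content": "c", "role": "r", "model": "M",
    "temperature": "T", "max_tokens": "x", "top_p": "p", "stream": "s",
    "stop": "S", "n": "n", "seed": "se", "user": "u",
    "frequency_penalty": "f", "presence_penalty": "P", "logit_bias": "lb",
    "logprobs": "lp", "top_logprobs": "tlp", "response_format": "rf",
    "tools": "ts", "tool_choice": "tc", "functions": "fs",
    "function_call": "fc",
}

# Abbreviations must be collision-free, or the inverse map (used when
# decompressing) would be ambiguous. Building it checks that for free.
REQUEST_KEYS_INVERSE = {abbrev: full for full, abbrev in REQUEST_KEYS.items()}
assert len(REQUEST_KEYS_INVERSE) == len(REQUEST_KEYS)

def compress_key(key: str) -> str:
    """Abbreviate one request key; unknown keys pass through unchanged."""
    return REQUEST_KEYS.get(key, key)
```

Passing unknown keys through unchanged is a reasonable degradation strategy, since the decompressor can apply the same rule in reverse.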

Response Keys

| Original | Abbreviated | Context |
|---|---|---|
| choices | C | Response choices |
| index | i | Choice index |
| message | m | Response message |
| delta | d | Streaming delta |
| finish_reason | fr | Completion reason |
| usage | U | Token usage |
| prompt_tokens | pt | Input tokens |
| completion_tokens | ct | Output tokens |
| total_tokens | tt | Total tokens |
| logprobs | lp | Log probabilities |
| created | cr | Timestamp |
| object | o | Object type |
| system_fingerprint | sf | System fingerprint |
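Note that m is context-dependent: it means messages in requests but message in responses. Expanding a compressed response is therefore a recursive key rename using a response-scoped inverse map. A sketch (hypothetical names; the r and c entries come from the request-key table, since they also appear inside messages):

```python
# Hypothetical response-scoped decompression map built from the table above.
RESPONSE_KEYS_INVERSE = {
    "C": "choices", "i": "index", "m": "message", "d": "delta",
    "fr": "finish_reason", "U": "usage", "pt": "prompt_tokens",
    "ct": "completion_tokens", "tt": "total_tokens", "lp": "logprobs",
    "cr": "created", "o": "object", "sf": "system_fingerprint",
    "r": "role", "c": "content",  # from the request-key table
}

def expand_keys(obj):
    """Recursively rename abbreviated keys back to their full forms.

    Values are left untouched here; value abbreviations (e.g. roles and
    finish reasons) need a separate field-aware pass.
    """
    if isinstance(obj, dict):
        return {RESPONSE_KEYS_INVERSE.get(k, k): expand_keys(v)
                for k, v in obj.items()}
    if isinstance(obj, list):
        return [expand_keys(item) for item in obj]
    return obj
```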

Tool/Function Keys

| Original | Abbreviated | Context |
|---|---|---|
| tool_calls | tc | Tool call array |
| function | fn | Function definition |
| name | n | Function name |
| arguments | a | Function arguments |
| type | t | Tool type |
| description | desc | Tool description |
| parameters | params | Function parameters |
| required | req | Required parameters |
| properties | props | Parameter properties |

Value Abbreviations

Role Values

| Original | Abbreviated |
|---|---|
| system | s |
| user | u |
| assistant | a |
| function | f |
| tool | t |

Finish Reason Values

| Original | Abbreviated |
|---|---|
| stop | s |
| length | l |
| tool_calls | tc |
| content_filter | cf |
| function_call | fc |

Response Format Types

| Original | Abbreviated |
|---|---|
| text | t |
| json_object | j |
| json_schema | js |
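Value abbreviations are scoped to the field they appear in: the same letter decodes differently by context (s is system as a role but stop as a finish reason). A field-aware sketch (hypothetical helper names, built from the three tables above):

```python
# Hypothetical field-scoped value maps; the same abbreviation decodes
# differently depending on which field it appears in.
VALUE_MAPS = {
    "role": {"system": "s", "user": "u", "assistant": "a",
             "function": "f", "tool": "t"},
    "finish_reason": {"stop": "s", "length": "l", "tool_calls": "tc",
                      "content_filter": "cf", "function_call": "fc"},
    "response_format": {"text": "t", "json_object": "j",
                        "json_schema": "js"},
}
VALUE_MAPS_INVERSE = {
    field: {abbrev: full for full, abbrev in mapping.items()}
    for field, mapping in VALUE_MAPS.items()
}

def abbreviate_value(field, value):
    """Abbreviate a value only if its field has a table; else pass through."""
    return VALUE_MAPS.get(field, {}).get(value, value)

def expand_value(field, value):
    return VALUE_MAPS_INVERSE.get(field, {}).get(value, value)
```

Scoping by field is what keeps the single-letter codes reversible despite the collisions across tables.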

Model Abbreviations

OpenAI Models

| Original | Abbreviated |
|---|---|
| gpt-4o | 4o |
| gpt-4o-mini | 4om |
| gpt-4o-2024-11-20 | 4o1120 |
| gpt-4o-2024-08-06 | 4o0806 |
| gpt-4-turbo | 4t |
| gpt-4-turbo-preview | 4tp |
| gpt-4 | 4 |
| gpt-4-32k | 432k |
| gpt-3.5-turbo | 35t |
| gpt-3.5-turbo-16k | 35t16k |
| o1 | o1 |
| o1-mini | o1m |
| o1-preview | o1p |
| o3 | o3 |
| o3-mini | o3m |

Meta Llama Models

| Original | Abbreviated |
|---|---|
| meta-llama/llama-3.3-70b | ml3370 |
| meta-llama/llama-3.3-70b-instruct | ml3370i |
| meta-llama/llama-3.1-405b | ml31405 |
| meta-llama/llama-3.1-405b-instruct | ml31405i |
| meta-llama/llama-3.1-70b | ml3170 |
| meta-llama/llama-3.1-70b-instruct | ml3170i |
| meta-llama/llama-3.1-8b | ml318 |
| meta-llama/llama-3.1-8b-instruct | ml318i |

Mistral Models

| Original | Abbreviated |
|---|---|
| mistralai/mistral-large | mim-l |
| mistralai/mistral-large-latest | mim-ll |
| mistralai/mistral-medium | mim-m |
| mistralai/mistral-small | mim-s |
| mistralai/mixtral-8x7b | mimx87 |
| mistralai/mixtral-8x22b | mimx822 |
| mistralai/codestral-latest | micodl |

DeepSeek Models

| Original | Abbreviated |
|---|---|
| deepseek/deepseek-v3 | ddv3 |
| deepseek/deepseek-r1 | ddr1 |
| deepseek/deepseek-coder | ddc |
| deepseek/deepseek-chat | ddchat |

Qwen Models

| Original | Abbreviated |
|---|---|
| qwen/qwen-2.5-72b | qq2572 |
| qwen/qwen-2.5-32b | qq2532 |
| qwen/qwen-2.5-coder-32b | qqc32 |
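Model abbreviations drop provider prefixes and compress version strings, and since each abbreviation is unique, a pair of dicts gives lossless round-tripping. A sketch using a few rows from the tables above (hypothetical names; unknown model IDs pass through unchanged):

```python
# Hypothetical model lookup built from a subset of the tables above.
MODEL_ABBREV = {
    "gpt-4o": "4o",
    "gpt-4o-mini": "4om",
    "gpt-3.5-turbo": "35t",
    "o3-mini": "o3m",
    "meta-llama/llama-3.1-70b-instruct": "ml3170i",
    "mistralai/mixtral-8x7b": "mimx87",
    "deepseek/deepseek-r1": "ddr1",
    "qwen/qwen-2.5-coder-32b": "qqc32",
}
# Inverse map for decompression; equal size confirms no collisions.
MODEL_EXPAND = {abbrev: full for full, abbrev in MODEL_ABBREV.items()}
assert len(MODEL_EXPAND) == len(MODEL_ABBREV)

def abbreviate_model(model_id: str) -> str:
    # Unknown models pass through so the protocol degrades gracefully.
    return MODEL_ABBREV.get(model_id, model_id)
```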

Default Values

Parameters with these values MAY be omitted during compression.

| Parameter | Default Value |
|---|---|
| temperature | 1.0 |
| top_p | 1.0 |
| n | 1 |
| stream | false |
| frequency_penalty | 0 |
| presence_penalty | 0 |
| logit_bias | {} |
| stop | null |
| logprobs | false |
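Because these defaults are fixed by the table, dropping them is lossless: a decompressor can reinsert any missing parameter. A minimal sketch (the function name is hypothetical; note that Python's `==` treats `0 == False` and `1 == 1.0` as true, so a stricter implementation might also compare types):

```python
# Hypothetical default-elision helper based on the table above.
DEFAULTS = {
    "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": False,
    "frequency_penalty": 0, "presence_penalty": 0, "logit_bias": {},
    "stop": None, "logprobs": False,
}

def strip_defaults(request: dict) -> dict:
    """Drop parameters whose value equals the documented default."""
    return {k: v for k, v in request.items()
            if not (k in DEFAULTS and v == DEFAULTS[k])}
```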

Compression Example

Original Request

```json
{
  "model": "gpt-4o",
  "messages": [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hello!"}
  ],
  "temperature": 1.0,
  "max_tokens": 100,
  "stream": false
}
```

Compressed Request

```
#T1|{"M":"4o","m":[{"r":"s","c":"You are helpful."},{"r":"u","c":"Hello!"}],"x":100}
```

Transformations Applied

  1. model → M
  2. gpt-4o → 4o
  3. messages → m
  4. role → r
  5. system → s
  6. content → c
  7. user → u
  8. max_tokens → x
  9. temperature: 1.0 → omitted (default)
  10. stream: false → omitted (default)
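Putting the tables together, the worked example can be reproduced end to end. The sketch below is illustrative Python covering only the mappings this request needs (the function and dict names are hypothetical); it emits exactly the compressed form shown above, including the #T1| header prefix from the example:

```python
import json

# Hypothetical end-to-end compressor for this example's mappings only.
REQ_KEYS = {"model": "M", "messages": "m", "role": "r", "content": "c",
            "max_tokens": "x", "temperature": "T", "stream": "s"}
MODELS = {"gpt-4o": "4o"}
ROLES = {"system": "s", "user": "u", "assistant": "a"}
DEFAULTS = {"temperature": 1.0, "stream": False}

def compress_request(request: dict) -> str:
    out = {}
    for key, value in request.items():
        if key in DEFAULTS and value == DEFAULTS[key]:
            continue  # default-valued parameters are omitted entirely
        if key == "model":
            value = MODELS.get(value, value)
        elif key == "messages":
            # Rename message keys; abbreviate role values only.
            value = [{REQ_KEYS.get(k, k): (ROLES.get(v, v) if k == "role" else v)
                      for k, v in msg.items()} for msg in value]
        out[REQ_KEYS.get(key, key)] = value
    # "#T1|" is the header prefix shown in the compressed example above;
    # compact separators avoid whitespace in the JSON payload.
    return "#T1|" + json.dumps(out, separators=(",", ":"))
```

Running it on the original request yields the documented compressed string byte for byte.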