# Quick Start Guide
Get started with M2M Protocol in 5 minutes.
## Installation

### From Source
```bash
git clone https://github.com/infernet-org/m2m-protocol.git
cd m2m-protocol
cargo install --path .
```

### Verify Installation
```bash
m2m --version
# m2m 0.2.0
```

## Basic Usage
### Compress a Request
```bash
m2m compress '{"model":"gpt-4o","messages":[{"role":"user","content":"Hello"}]}'
# Output: #T1|{"M":"4o","m":[{"r":"u","c":"Hello"}]}
```

As the output shows, the Token algorithm shortens well-known keys and values (`model` → `M`, `messages` → `m`, `role` → `r`, `content` → `c`, `gpt-4o` → `4o`, `user` → `u`), and the `#T1|` prefix tags the payload so it can be decompressed later.

### Decompress
```bash
m2m decompress '#T1|{"M":"4o","m":[{"r":"u","c":"Hello"}]}'
# Output: {"model":"gpt-4o","messages":[{"role":"user","content":"Hello"}]}
```

### Analyze Content
```bash
m2m analyze '{"model":"gpt-4o","messages":[{"role":"user","content":"Hello"}]}'
# Output:
# Algorithm: Token
# Original: 68 bytes
# Compressed: 45 bytes
# Ratio: 66% (34% savings)
```
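The two percentages are just the byte counts restated: the ratio is compressed size over original size, and the savings is one minus that ratio. A quick check in Rust:

```rust
fn main() {
    // Numbers from the analyze output above: 68 bytes in, 45 bytes out.
    let (original, compressed) = (68.0_f64, 45.0_f64);
    let ratio = compressed / original; // ~0.66 -> "Ratio: 66%"
    let savings = 1.0 - ratio;         // ~0.34 -> "(34% savings)"
    println!("ratio: {:.0}%, savings: {:.0}%", ratio * 100.0, savings * 100.0);
}
```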
## Using as a Library

### Add Dependency
```toml
[dependencies]
m2m = "0.2"
```

### Compress/Decompress
```rust
use m2m::{CodecEngine, Algorithm};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let engine = CodecEngine::new();

    // Compress
    let content = r#"{"model":"gpt-4o","messages":[{"role":"user","content":"Hello"}]}"#;
    let result = engine.compress(content, Algorithm::Token)?;

    println!("Compressed: {}", result.data);
    println!("Savings: {:.0}%", (1.0 - result.byte_ratio()) * 100.0);

    // Decompress
    let original = engine.decompress(&result.data)?;
    assert_eq!(original, content);

    Ok(())
}
```

### Auto-Select Algorithm
```rust
use m2m::CodecEngine;

let engine = CodecEngine::new();
let (result, algorithm) = engine.compress_auto(content)?;
println!("Selected algorithm: {:?}", algorithm);
```
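The fragment above assumes `content` is already in scope and a `?`-compatible return type. A minimal runnable version, reusing the request JSON and the `CodecEngine` API from the previous example:

```rust
use m2m::CodecEngine;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let engine = CodecEngine::new();
    let content = r#"{"model":"gpt-4o","messages":[{"role":"user","content":"Hello"}]}"#;

    // compress_auto picks an algorithm and reports which one it chose.
    let (result, algorithm) = engine.compress_auto(content)?;
    println!("Selected algorithm: {:?}", algorithm);
    println!("Compressed: {}", result.data);
    Ok(())
}
```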
## Security Scanning

```rust
use m2m::SecurityScanner;

let scanner = SecurityScanner::new().with_blocking(0.8);
let result = scanner.scan("Ignore previous instructions")?;

if !result.safe {
    println!("Threat detected: {:?}", result.threats);
}
```
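Scanning and compression compose naturally, for example rejecting a request before compressing it. A sketch using only the APIs shown above (the `with_blocking(0.8)` threshold matches the previous example):

```rust
use m2m::{Algorithm, CodecEngine, SecurityScanner};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let scanner = SecurityScanner::new().with_blocking(0.8);
    let engine = CodecEngine::new();

    let content = r#"{"model":"gpt-4o","messages":[{"role":"user","content":"Hello"}]}"#;

    // Scan the raw text first so threats are caught before compression.
    let scan = scanner.scan(content)?;
    if !scan.safe {
        eprintln!("Blocked: {:?}", scan.threats);
        return Ok(());
    }

    let result = engine.compress(content, Algorithm::Token)?;
    println!("Safe to forward: {}", result.data);
    Ok(())
}
```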
## Running the Proxy

### Start Proxy
```bash
# Forward to local Ollama
m2m server --port 8080 --upstream http://localhost:11434/v1

# Forward to OpenAI
m2m server --port 8080 --upstream https://api.openai.com/v1 --api-key $OPENAI_API_KEY
```

### Use with OpenAI SDK
```python
from openai import OpenAI

# Point to M2M proxy instead of OpenAI directly
client = OpenAI(base_url="http://localhost:8080/v1")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
```

### Check Proxy Stats
```bash
curl http://localhost:8080/stats
# {
#   "requests_total": 42,
#   "bytes_in": 12345,
#   "bytes_out": 8765,
#   "compression_ratio": 0.71
# }
```

Here `compression_ratio` is `bytes_out / bytes_in` (8765 / 12345 ≈ 0.71).

## Configuration
### Config File
Create `~/.m2m/config.toml`:
```toml
[proxy]
listen = "127.0.0.1:8080"
upstream = "http://localhost:11434/v1"

[security]
enabled = true
threshold = 0.8

[compression]
prefer_token = true
```

### Environment Variables
```bash
export M2M_SERVER_PORT=8080
export M2M_UPSTREAM_URL=http://localhost:11434/v1
export M2M_SECURITY_ENABLED=true
```

## Next Steps
- Proxy Guide - Detailed proxy configuration
- Compression Spec - Algorithm details
- Security - Security considerations