MetaMUI Migration Guide
Step-by-step guidance for classical→PQC transitions
This guide provides practical steps for migrating from classical to post-quantum cryptography with MetaMUI's recommended suite, so that systems remain secure and performant throughout the transition.
Migration Overview
Three-Phase Strategy
Phase 1: Infrastructure Preparation (6-12 months)
- Goal: Prepare systems for larger keys and signatures
- Risk: Low - no cryptographic changes
- Effort: Medium - infrastructure scaling
Phase 2: Hybrid Deployment (12-24 months)
- Goal: Run classical and PQC algorithms simultaneously
- Risk: Medium - increased complexity
- Effort: High - dual algorithm support
Phase 3: PQC Migration (6-12 months)
- Goal: Complete transition to post-quantum algorithms
- Risk: Low - proven hybrid operation
- Effort: Medium - remove classical algorithms
Phase 1: Infrastructure Preparation
System Requirements Assessment
Storage Requirements
Classical Storage Needs:
├── Private Keys: 32-64 bytes per key
├── Public Keys: 32-64 bytes per key
├── Signatures: 64 bytes per signature
└── Total Overhead: ~160 bytes per identity
Post-Quantum Storage Needs:
├── Private Keys: 2,400 bytes per ML-KEM-768 key + 1,281 bytes per Falcon-512 key
├── Public Keys: 1,184 bytes per ML-KEM-768 key + 897 bytes per Falcon-512 key
├── Signatures: 690 bytes per Falcon-512 signature
└── Total Overhead: ~5,500 bytes per identity (~35x increase)
Action Items:
- Audit current storage capacity and usage patterns
- Plan storage expansion (recommend 50x capacity for safety margin)
- Implement key rotation and archival policies
- Test storage performance with larger key sizes (a sizing sketch follows this list)
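As a rough check on capacity planning, per-identity overhead can be estimated directly from the figures above. The sketch below is a minimal estimate assuming one ML-KEM-768 key pair and one Falcon-512 key pair per identity; the byte counts restate the approximate sizes quoted in this section and are not read from any MetaMUI API.

# Rough per-identity storage estimate, using the approximate sizes quoted above.
ML_KEM_768_PRIVATE = 2400   # bytes
ML_KEM_768_PUBLIC = 1184
FALCON_512_PRIVATE = 1281
FALCON_512_PUBLIC = 897
FALCON_512_SIGNATURE = 690  # conservative per-signature figure used in this guide

def pqc_bytes_per_identity(retained_signatures: int = 0) -> int:
    """Approximate bytes stored per identity once PQC keys are provisioned."""
    keys = (ML_KEM_768_PRIVATE + ML_KEM_768_PUBLIC
            + FALCON_512_PRIVATE + FALCON_512_PUBLIC)
    return keys + retained_signatures * FALCON_512_SIGNATURE

def planned_capacity_gib(identities: int, safety_factor: float = 1.5) -> float:
    """Total key-storage capacity to provision, including headroom."""
    return identities * pqc_bytes_per_identity() * safety_factor / 2**30

print(pqc_bytes_per_identity())          # roughly 5.8 KB of key material per identity
print(planned_capacity_gib(1_000_000))   # capacity needed for one million identities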
Network Capacity Planning
Classical Network Usage:
├── Key Exchange: 32 bytes public key
├── Signature Transmission: 64 bytes per signature
├── Handshake Overhead: ~200 bytes total
└── Daily Overhead (1000 operations): ~200 KB
Post-Quantum Network Usage:
├── Key Exchange: 1,184 bytes public key + 1,088 bytes ciphertext
├── Signature Transmission: 690 bytes per signature
├── Handshake Overhead: ~2,500 bytes total
└── Daily Overhead (1000 operations): ~2.5 MB
Action Items:
- Monitor current network utilization patterns
- Plan for 10-15x increase in cryptographic traffic
- Implement signature compression for bulk operations
- Optimize caching strategies for frequently used keys (a bandwidth estimate follows this list)
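The same back-of-the-envelope arithmetic applies to bandwidth. The snippet below simply multiplies the per-handshake figures above by an expected daily operation count; it is an estimate for planning, not a measurement.

# Daily cryptographic traffic estimate based on the handshake figures above.
CLASSICAL_HANDSHAKE_BYTES = 200
PQC_HANDSHAKE_BYTES = 2500   # ML-KEM-768 public key + ciphertext + Falcon-512 signature

def daily_overhead_mib(operations_per_day: int, handshake_bytes: int) -> float:
    return operations_per_day * handshake_bytes / 2**20

ops = 1000
print(f"classical: {daily_overhead_mib(ops, CLASSICAL_HANDSHAKE_BYTES):.2f} MiB/day")
print(f"post-quantum: {daily_overhead_mib(ops, PQC_HANDSHAKE_BYTES):.2f} MiB/day")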
Compute Resource Planning
Performance Impact Analysis:
├── CPU Usage: +25% during hybrid phase, +10% final PQC
├── Memory Usage: +100% for full PQC key storage
├── I/O Impact: Increased due to larger key/signature sizes
└── Battery Impact: Falcon-512 verification is fast, so battery impact on mobile remains modest
Action Items:
- Benchmark current CPU utilization during peak operations
- Plan compute scaling for hybrid operation phase
- Test mobile battery impact with PQC algorithm implementations
- Optimize memory allocation patterns for larger keys (a benchmarking sketch follows this list)
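Before committing to compute-scaling numbers, measure sign/verify latency with the same primitives the deployment will actually use. The harness below is a minimal sketch: the timed workload is a stand-in so the example runs as-is, and should be replaced with the project's real Falcon-512 and ML-KEM-768 calls.

import hashlib
import statistics
import time

def benchmark(op, iterations: int = 200) -> dict:
    """Time one operation repeatedly and report simple latency statistics (ms)."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        op()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "median_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * len(samples)) - 1],
    }

# Stand-in workload so the harness runs standalone; substitute the deployment's
# actual falcon_512_sign/verify and ml_kem_768_encapsulate/decapsulate calls.
message = b"x" * 1024
print(benchmark(lambda: hashlib.blake2b(message).digest()))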
Infrastructure Preparation Checklist
Database Schema Updates
-- Extend key storage for PQC algorithms
ALTER TABLE user_keys ADD COLUMN pqc_private_key BLOB(4096);
ALTER TABLE user_keys ADD COLUMN pqc_public_key BLOB(2048);
ALTER TABLE signatures ADD COLUMN pqc_signature BLOB(1024);
-- Add algorithm identifier columns
ALTER TABLE user_keys ADD COLUMN key_algorithm VARCHAR(32);
ALTER TABLE signatures ADD COLUMN signature_algorithm VARCHAR(32);
-- Indexes for algorithm-specific queries
CREATE INDEX idx_key_algorithm ON user_keys(key_algorithm);
CREATE INDEX idx_signature_algorithm ON signatures(signature_algorithm);
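Once the columns exist, writing a hybrid key record is a single tagged insert. The sketch below uses an in-memory sqlite3 database and a reduced column set purely so the example runs standalone; production code would go through the existing database driver, and private key material should be wrapped/encrypted before storage rather than written raw as shown here.

import sqlite3

# Reduced illustration of the extended schema above; sqlite3 is used only so
# the example runs without external dependencies.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE user_keys (
        identity TEXT PRIMARY KEY,
        public_key BLOB,
        pqc_private_key BLOB,
        pqc_public_key BLOB,
        key_algorithm VARCHAR(32)
    )
""")

def store_hybrid_keys(identity: str, classical_public: bytes,
                      pqc_private: bytes, pqc_public: bytes) -> None:
    """Persist classical and PQC key material with an explicit algorithm tag."""
    # Illustration only: real deployments should encrypt private keys at rest.
    conn.execute(
        "INSERT INTO user_keys VALUES (?, ?, ?, ?, ?)",
        (identity, classical_public, pqc_private, pqc_public, "ml_kem_768"),
    )
    conn.commit()

store_hybrid_keys("did:metamui:example", b"\x01" * 32, b"\x02" * 2400, b"\x03" * 1184)
print(conn.execute("SELECT identity, key_algorithm FROM user_keys").fetchall())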
API Schema Updates
{
  "extended_key_format": {
    "classical": {
      "private_key": "32 bytes base64",
      "public_key": "32 bytes base64"
    },
    "post_quantum": {
      "private_key": "2400 bytes base64",
      "public_key": "1184 bytes base64"
    },
    "algorithm": "string identifier"
  },
  "signature_format": {
    "classical": "64 bytes base64",
    "post_quantum": "690 bytes base64",
    "algorithm": "string identifier"
  }
}
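A small amount of validation at the API boundary catches most format mix-ups during the hybrid phase. The sketch below checks decoded field lengths against the sizes in the schema above; the function name and the example payload are illustrative, and the expected sizes are the approximate figures used throughout this guide.

import base64

EXPECTED_LENGTHS = {
    ("classical", "public_key"): 32,
    ("post_quantum", "public_key"): 1184,
    ("post_quantum", "private_key"): 2400,
}

def validate_key_payload(payload: dict) -> list:
    """Return a list of problems found in an extended_key_format payload."""
    problems = []
    for (section, field), expected in EXPECTED_LENGTHS.items():
        encoded = payload.get(section, {}).get(field)
        if encoded is None:
            problems.append(f"missing {section}.{field}")
            continue
        decoded = base64.b64decode(encoded)
        if len(decoded) != expected:
            problems.append(f"{section}.{field}: got {len(decoded)} bytes, expected {expected}")
    if "algorithm" not in payload:
        problems.append("missing algorithm identifier")
    return problems

example = {
    "classical": {"public_key": base64.b64encode(b"\x00" * 32).decode()},
    "post_quantum": {
        "public_key": base64.b64encode(b"\x00" * 1184).decode(),
        "private_key": base64.b64encode(b"\x00" * 2400).decode(),
    },
    "algorithm": "hybrid-x25519-mlkem768",
}
print(validate_key_payload(example))  # -> [] when the payload is well-formed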
Configuration Management
# metamui-config.yml
cryptographic_algorithms:
  phase: "preparation"   # preparation, hybrid, post_quantum
  classical:
    key_exchange: "x25519"
    signatures: "sr25519"
    aead: "chacha20_poly1305"
    hashing: "blake3"
  post_quantum:
    key_exchange: "ml_kem_768"
    signatures: "falcon_512"
    aead: "chacha20_poly1305"   # unchanged
    hashing: "blake3"           # unchanged

storage:
  key_compression: true
  signature_compression: true
  cache_size: "1GB"

performance:
  parallel_verification: true
  batch_operations: true
  mobile_optimization: true
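Keeping the migration phase in configuration rather than code makes the cut-over a config change. The loader below is a minimal sketch assuming PyYAML is available; it reads the file above, validates the phase, and reports which algorithm families should be active.

import yaml  # PyYAML, assumed available in the deployment environment

VALID_PHASES = {"preparation", "hybrid", "post_quantum"}

def load_crypto_config(path: str = "metamui-config.yml") -> dict:
    """Read the algorithm configuration and validate the migration phase."""
    with open(path, "r", encoding="utf-8") as handle:
        config = yaml.safe_load(handle)
    phase = config["cryptographic_algorithms"]["phase"]
    if phase not in VALID_PHASES:
        raise ValueError(f"unknown migration phase: {phase!r}")
    return config

def active_algorithms(config: dict) -> dict:
    """Return the algorithm families that should be active for the configured phase."""
    algos = config["cryptographic_algorithms"]
    phase = algos["phase"]
    if phase == "post_quantum":
        return {"post_quantum": algos["post_quantum"]}
    if phase == "hybrid":
        return {"classical": algos["classical"], "post_quantum": algos["post_quantum"]}
    return {"classical": algos["classical"]}   # "preparation" stays classical-only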
Phase 2: Hybrid Deployment
Dual Algorithm Implementation
Key Generation Strategy
class HybridKeyManager:
    def __init__(self):
        self.classical_suite = ClassicalSuite()
        self.pq_suite = PostQuantumSuite()

    def generate_hybrid_keypair(self, identity):
        """Generate both classical and PQC key pairs"""
        # Classical key generation
        classical_private = self.classical_suite.key_exchange.x25519_generate_private()
        classical_public = self.classical_suite.key_exchange.x25519_public_key(classical_private)

        # Post-quantum key generation
        pq_private, pq_public = self.pq_suite.key_exchange.ml_kem_768_keygen()

        return {
            'identity': identity,
            'classical': {
                'private': classical_private,
                'public': classical_public,
                'algorithm': 'x25519'
            },
            'post_quantum': {
                'private': pq_private,
                'public': pq_public,
                'algorithm': 'ml_kem_768'
            }
        }

    def generate_hybrid_signing_keypair(self, identity):
        """Generate both classical and PQC signing key pairs"""
        # Classical signing keys
        classical_private = self.classical_suite.signatures.sr25519_generate_private()
        classical_public = self.classical_suite.signatures.sr25519_public_key(classical_private)

        # Post-quantum signing keys
        pq_private, pq_public = self.pq_suite.signatures.falcon_512_keygen()

        return {
            'identity': identity,
            'classical': {
                'private': classical_private,
                'public': classical_public,
                'algorithm': 'sr25519'
            },
            'post_quantum': {
                'private': pq_private,
                'public': pq_public,
                'algorithm': 'falcon_512'
            }
        }
Signature Strategy
class HybridSignatureManager:
    def __init__(self):
        self.classical_suite = ClassicalSuite()
        self.pq_suite = PostQuantumSuite()

    def hybrid_sign(self, message, classical_key, pq_key):
        """Create signatures with both algorithms"""
        classical_sig = self.classical_suite.signatures.sr25519_sign(
            message, classical_key
        )
        pq_sig = self.pq_suite.signatures.falcon_512_sign(
            message, pq_key
        )
        return {
            'message_hash': self.classical_suite.hashing.blake3(message),
            'signatures': {
                'classical': {
                    'signature': classical_sig,
                    'algorithm': 'sr25519'
                },
                'post_quantum': {
                    'signature': pq_sig,
                    'algorithm': 'falcon_512'
                }
            }
        }

    def hybrid_verify(self, message, signatures, classical_pubkey, pq_pubkey):
        """Verify both signatures (both must pass)"""
        classical_valid = self.classical_suite.signatures.sr25519_verify(
            message, signatures['classical']['signature'], classical_pubkey
        )
        pq_valid = self.pq_suite.signatures.falcon_512_verify(
            message, signatures['post_quantum']['signature'], pq_pubkey
        )
        # Both signatures must be valid for hybrid verification to pass
        return classical_valid and pq_valid
Key Exchange Strategy
class HybridKeyExchange:
    def __init__(self):
        self.classical_suite = ClassicalSuite()
        self.pq_suite = PostQuantumSuite()
        self.kdf = self.classical_suite.kdf

    def hybrid_key_exchange(self, peer_classical_pubkey, peer_pq_pubkey,
                            local_classical_privkey, local_pq_privkey):
        """Perform both classical and PQC key exchange"""
        # Classical key exchange
        classical_shared = self.classical_suite.key_exchange.x25519(
            local_classical_privkey, peer_classical_pubkey
        )
        # Post-quantum key exchange
        pq_ciphertext, pq_shared = self.pq_suite.key_exchange.ml_kem_768_encapsulate(
            peer_pq_pubkey
        )
        # Combine shared secrets using KDF
        combined_shared = self.kdf.blake3_derive(
            classical_shared + pq_shared,
            salt=b"hybrid-kex-v1",
            info=b"classical-pqc-combination",
            length=32
        )
        return {
            'shared_secret': combined_shared,
            'pq_ciphertext': pq_ciphertext,  # Send to peer for decapsulation
            'algorithm': 'hybrid-x25519-mlkem768'
        }
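The class above covers the initiator, which encapsulates to the peer's ML-KEM-768 public key. The responder decapsulates the received ciphertext and derives the same combined secret. The sketch below mirrors the method above under the same suite interfaces; it is illustrative rather than an API MetaMUI is documented to expose.

class HybridKeyExchangeResponder:
    """Responder-side counterpart to HybridKeyExchange.hybrid_key_exchange."""
    def __init__(self):
        self.classical_suite = ClassicalSuite()
        self.pq_suite = PostQuantumSuite()
        self.kdf = self.classical_suite.kdf

    def hybrid_key_receive(self, peer_classical_pubkey, received_pq_ciphertext,
                           local_classical_privkey, local_pq_privkey):
        """Derive the same combined shared secret as the initiator."""
        # Classical key exchange is symmetric: the same x25519 computation.
        classical_shared = self.classical_suite.key_exchange.x25519(
            local_classical_privkey, peer_classical_pubkey
        )
        # Recover the PQC shared secret from the initiator's ciphertext.
        pq_shared = self.pq_suite.key_exchange.ml_kem_768_decapsulate(
            received_pq_ciphertext, local_pq_privkey
        )
        # Same KDF inputs as the initiator, so both sides agree on the key.
        return self.kdf.blake3_derive(
            classical_shared + pq_shared,
            salt=b"hybrid-kex-v1",
            info=b"classical-pqc-combination",
            length=32
        )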
Gradual Rollout Strategy
Service-by-Service Migration
class MigrationManager:
    def __init__(self):
        self.migration_config = self.load_migration_config()
        self.classical_suite = ClassicalSuite()
        self.pq_suite = PostQuantumSuite()
        self.hybrid_manager = HybridManager()

    def get_algorithm_for_service(self, service_name, operation_type):
        """Determine which algorithm to use based on migration phase"""
        service_config = self.migration_config.get(service_name, {})
        phase = service_config.get('phase', 'classical')
        if phase == 'classical':
            return self.classical_suite
        elif phase == 'hybrid':
            return self.hybrid_manager
        elif phase == 'post_quantum':
            return self.pq_suite
        else:
            raise ValueError(f"Unknown migration phase: {phase}")

    def migrate_service(self, service_name, target_phase):
        """Migrate a specific service to target phase"""
        current_phase = self.migration_config[service_name]['phase']
        if current_phase == 'classical' and target_phase == 'hybrid':
            self._migrate_to_hybrid(service_name)
        elif current_phase == 'hybrid' and target_phase == 'post_quantum':
            self._migrate_to_pqc(service_name)
        else:
            raise ValueError(f"Invalid migration path: {current_phase} -> {target_phase}")

    def _migrate_to_hybrid(self, service_name):
        """Migrate service from classical to hybrid mode"""
        # Generate PQC keys for all existing identities
        identities = self.get_service_identities(service_name)
        for identity in identities:
            self._generate_pqc_keys(identity)
        # Update service configuration
        self.migration_config[service_name]['phase'] = 'hybrid'
        self.save_migration_config()

    def _migrate_to_pqc(self, service_name):
        """Migrate service from hybrid to full PQC"""
        # Verify all operations work with PQC-only
        self._verify_pqc_compatibility(service_name)
        # Update service configuration
        self.migration_config[service_name]['phase'] = 'post_quantum'
        self.save_migration_config()
        # Schedule classical key cleanup (after transition period)
        self._schedule_classical_key_cleanup(service_name)
Migration Configuration Example
# migration-config.yml
services:
  user_authentication:
    phase: "hybrid"
    start_date: "2024-03-01"
    target_date: "2024-09-01"
    rollback_plan: "immediate_classical"
  transaction_signing:
    phase: "classical"
    start_date: "2024-06-01"
    target_date: "2024-12-01"
    rollback_plan: "gradual_rollback"
  inter_service_communication:
    phase: "preparation"
    start_date: "2024-09-01"
    target_date: "2025-03-01"
    rollback_plan: "immediate_classical"

migration_parameters:
  batch_size: 1000                # identities to migrate per batch
  verification_period: "7 days"   # verification period before next batch
  rollback_threshold: "5%"        # error rate triggering rollback
  performance_threshold: "20%"    # acceptable performance degradation
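The migration_parameters above can drive a simple batch loop: migrate a batch, wait out the verification period, and stop if the observed error rate crosses the rollback threshold. The sketch below shows the control flow only; migrate_batch and current_error_rate are placeholders standing in for the MigrationManager and monitoring hooks described elsewhere in this guide.

import time

def run_batched_migration(identities, migrate_batch, current_error_rate,
                          batch_size=1000, rollback_threshold=0.05,
                          verification_period_s=7 * 24 * 3600):
    """Migrate identities in batches, pausing to verify between batches.

    migrate_batch(batch) performs the migration; current_error_rate() returns
    the observed error rate (0.0-1.0) for already-migrated identities.
    """
    for start in range(0, len(identities), batch_size):
        batch = identities[start:start + batch_size]
        migrate_batch(batch)

        # Let the batch run in production before committing to the next one.
        # (In a real deployment this would be a scheduled job, not a sleep.)
        time.sleep(verification_period_s)

        if current_error_rate() > rollback_threshold:
            # Hand control to the rollback procedures described below.
            return {"status": "halted", "migrated": start + len(batch)}

    return {"status": "complete", "migrated": len(identities)}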
Phase 3: PQC Migration
Classical Algorithm Removal
Gradual Key Rotation
class PQCMigrationManager:
    def __init__(self):
        self.pq_suite = PostQuantumSuite()
        self.migration_state = self.load_migration_state()

    def rotate_to_pqc_only(self, identity_batch):
        """Rotate identities to PQC-only operation"""
        for identity in identity_batch:
            # Verify PQC keys are operational
            if not self._verify_pqc_keys(identity):
                raise MigrationError(f"PQC keys not ready for {identity}")
            # Update identity to PQC-only mode
            self._set_identity_mode(identity, 'post_quantum')
            # Schedule classical key removal (after grace period)
            self._schedule_classical_cleanup(identity)

    def _verify_pqc_keys(self, identity):
        """Verify PQC keys work for all required operations"""
        try:
            # Test signing and verification
            test_message = b"migration-test-message"
            signature = self.pq_suite.signatures.falcon_512_sign(
                test_message, identity.pq_signing_key
            )
            valid = self.pq_suite.signatures.falcon_512_verify(
                test_message, signature, identity.pq_public_key
            )
            if not valid:
                return False
            # Test key exchange
            test_ciphertext, test_shared = self.pq_suite.key_exchange.ml_kem_768_encapsulate(
                identity.pq_kex_public_key
            )
            decap_shared = self.pq_suite.key_exchange.ml_kem_768_decapsulate(
                test_ciphertext, identity.pq_kex_private_key
            )
            return test_shared == decap_shared
        except Exception as e:
            self.log_migration_error(identity, f"PQC verification failed: {e}")
            return False

    def cleanup_classical_keys(self, identity, grace_period_expired=True):
        """Remove classical keys after grace period"""
        if not grace_period_expired:
            raise MigrationError("Grace period not expired, cannot cleanup classical keys")
        # Verify identity is operating successfully with PQC-only
        if not self._verify_pqc_only_operation(identity):
            raise MigrationError("Identity not ready for classical key cleanup")
        # Remove classical keys
        self._remove_classical_keys(identity)
        # Update migration state
        self.migration_state[identity]['classical_keys_removed'] = True
        self.save_migration_state()
Final Migration Verification
Comprehensive Testing
class MigrationVerification:
    def __init__(self):
        self.pq_suite = PostQuantumSuite()
        self.test_vectors = self.load_test_vectors()

    def verify_complete_migration(self, service_name):
        """Comprehensive verification of PQC-only operation"""
        results = {
            'service': service_name,
            'tests': {},
            'performance': {},
            'security': {}
        }
        # Functional tests
        results['tests']['signing'] = self._test_signing_operations()
        results['tests']['key_exchange'] = self._test_key_exchange_operations()
        results['tests']['encryption'] = self._test_encryption_operations()
        results['tests']['hashing'] = self._test_hashing_operations()
        # Performance tests
        results['performance']['throughput'] = self._measure_throughput()
        results['performance']['latency'] = self._measure_latency()
        results['performance']['memory_usage'] = self._measure_memory_usage()
        # Security verification
        results['security']['constant_time'] = self._verify_constant_time()
        results['security']['side_channel'] = self._verify_side_channel_resistance()
        results['security']['test_vectors'] = self._verify_test_vectors()
        return results

    def _test_signing_operations(self):
        """Comprehensive test of Falcon-512 signing functionality"""
        try:
            # Generate test key pair
            private_key, public_key = self.pq_suite.signatures.falcon_512_keygen()
            # Test various message sizes
            test_messages = [
                b"short",
                b"medium_length_message_for_testing",
                b"very_long_message_" * 100,
                b"",             # empty message
                b"\x00" * 1024   # binary data
            ]
            for message in test_messages:
                signature = self.pq_suite.signatures.falcon_512_sign(message, private_key)
                valid = self.pq_suite.signatures.falcon_512_verify(message, signature, public_key)
                if not valid:
                    return {'status': 'failed', 'message': f'Verification failed for message length {len(message)}'}
            return {'status': 'passed', 'tests': len(test_messages)}
        except Exception as e:
            return {'status': 'error', 'error': str(e)}
Migration Monitoring and Rollback
Monitoring Strategy
Key Performance Indicators
class MigrationMonitor:
    def __init__(self):
        self.metrics = MetricsCollector()
        self.alerts = AlertManager()

    def monitor_migration_health(self):
        """Monitor migration progress and health"""
        return {
            'performance': {
                'cpu_usage_increase': self.metrics.get_cpu_usage_delta(),
                'memory_usage_increase': self.metrics.get_memory_usage_delta(),
                'operation_latency_increase': self.metrics.get_latency_delta(),
                'throughput_decrease': self.metrics.get_throughput_delta()
            },
            'reliability': {
                'error_rate': self.metrics.get_error_rate(),
                'success_rate': self.metrics.get_success_rate(),
                'timeout_rate': self.metrics.get_timeout_rate()
            },
            'resource_usage': {
                'storage_utilization': self.metrics.get_storage_usage(),
                'network_bandwidth': self.metrics.get_network_usage(),
                'battery_impact': self.metrics.get_battery_usage()
            }
        }

    def check_rollback_conditions(self):
        """Check if rollback conditions are met"""
        health = self.monitor_migration_health()
        rollback_triggers = [
            health['reliability']['error_rate'] > 0.05,           # 5% error rate
            health['performance']['cpu_usage_increase'] > 0.50,   # 50% CPU increase
            health['performance']['throughput_decrease'] > 0.30,  # 30% throughput loss
            health['reliability']['success_rate'] < 0.95          # Less than 95% success
        ]
        if any(rollback_triggers):
            self.alerts.trigger_rollback_alert(health)
            return True
        return False
Rollback Procedures
class RollbackManager:
    def __init__(self):
        self.backup_manager = BackupManager()
        self.classical_suite = ClassicalSuite()
        self.alerts = AlertManager()  # needed for critical_rollback_failure below

    def emergency_rollback(self, service_name, target_phase='classical'):
        """Emergency rollback to previous stable state"""
        try:
            # Stop new PQC operations
            self._pause_pqc_operations(service_name)
            # Restore classical key operations
            self._restore_classical_operations(service_name)
            # Verify classical operation functionality
            if not self._verify_classical_operation(service_name):
                raise RollbackError("Classical operation verification failed")
            # Update service configuration
            self._update_service_phase(service_name, target_phase)
            # Resume service with classical algorithms
            self._resume_service(service_name)
            return {'status': 'success', 'rolled_back_to': target_phase}
        except Exception as e:
            self.alerts.critical_rollback_failure(service_name, str(e))
            return {'status': 'failed', 'error': str(e)}

    def gradual_rollback(self, service_name, rollback_percentage=10):
        """Gradually roll back a percentage of operations to classical"""
        # Identify operations to roll back
        operations_to_rollback = self._select_rollback_operations(
            service_name, rollback_percentage
        )
        # Configure hybrid mode with reduced PQC usage
        self._configure_partial_rollback(service_name, operations_to_rollback)
        # Monitor rollback effectiveness
        return self._monitor_rollback_progress(service_name, operations_to_rollback)
Best Practices and Common Pitfalls
Migration Best Practices
1. Comprehensive Testing
- Test all code paths: Ensure both happy path and error conditions work with PQC
- Load testing: Verify performance under realistic load conditions
- Compatibility testing: Ensure interoperability between classical and PQC systems
- Security testing: Verify constant-time operations and side-channel resistance
2. Gradual Rollout
- Start with non-critical services: Begin migration with services that have minimal business impact
- Batch processing: Migrate identities in small batches with verification between batches
- Rollback readiness: Always maintain the ability to roll back to the previous state
- Monitoring: Continuous monitoring of key performance and reliability metrics
3. Key Management
- Secure key generation: Use proper entropy sources for PQC key generation
- Key rotation: Implement proper key rotation schedules for larger PQC keys
- Backup and recovery: Ensure key backup systems handle larger PQC key sizes
- Grace periods: Maintain classical keys during transition periods for rollback capability (a bookkeeping sketch follows this list)
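Grace periods are easiest to enforce when the retirement date is recorded next to the key rather than implied by operational convention. The sketch below is minimal bookkeeping; the class name and the example identity string are illustrative and assume nothing about MetaMUI's key store beyond what this guide already describes.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class ClassicalKeyRetirement:
    """Records when a classical key becomes eligible for removal."""
    identity: str
    pqc_primary_since: datetime
    grace_period: timedelta = timedelta(days=30)

    def cleanup_due(self, now: Optional[datetime] = None) -> bool:
        """True once the grace period after the PQC cut-over has elapsed."""
        now = now or datetime.now(timezone.utc)
        return now >= self.pqc_primary_since + self.grace_period

record = ClassicalKeyRetirement(
    identity="did:metamui:example",
    pqc_primary_since=datetime.now(timezone.utc),
)
print(record.cleanup_due())  # False until 30 days after the cut-over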
Common Pitfalls to Avoid
1. Insufficient Resource Planning
# ❌ Don't underestimate resource requirements
# Assuming same storage/network capacity will work
# ✅ Plan for significant resource increases
storage_multiplier = 35 # 35x storage increase for keys
network_multiplier = 15 # 15x network overhead increase
memory_multiplier = 2 # 2x memory for hybrid operations
2. Incomplete Testing
# ❌ Don't test only happy paths
def inadequate_test():
    # Only tests successful operations
    signature = falcon_512_sign(message, key)
    assert falcon_512_verify(message, signature, public_key)

# ✅ Test error conditions and edge cases
def comprehensive_test():
    # Test successful operations
    signature = falcon_512_sign(message, key)
    assert falcon_512_verify(message, signature, public_key)

    # Test invalid signatures
    invalid_sig = signature[:-1] + b'\x00'
    assert not falcon_512_verify(message, invalid_sig, public_key)

    # Test wrong public keys
    wrong_key = generate_different_public_key()
    assert not falcon_512_verify(message, signature, wrong_key)

    # Test empty/malformed inputs
    assert not falcon_512_verify(b'', signature, public_key)
3. Missing Rollback Plans
# ❌ Don't migrate without rollback capability
def dangerous_migration():
    # Remove classical keys immediately
    delete_classical_keys(identity)
    # No way to roll back if PQC fails

# ✅ Maintain rollback capability during transition
def safe_migration():
    # Keep classical keys during grace period
    set_primary_algorithm(identity, 'post_quantum')
    schedule_classical_cleanup(identity, grace_period_days=30)
    # Can roll back to classical if needed
Timeline and Milestones
Recommended Migration Timeline
Months 1-6: Infrastructure Preparation
- Storage capacity expansion (50x current capacity)
- Network bandwidth analysis and expansion planning
- Database schema updates for PQC key storage
- API schema extensions for algorithm identification
- Development environment PQC testing setup
- Staff training on PQC algorithms and migration procedures
Months 7-12: Hybrid Development
- Hybrid algorithm implementation and testing
- Comprehensive test suite development
- Performance benchmarking and optimization
- Security analysis and verification
- Rollback procedure development and testing
- Migration monitoring system implementation
Months 13-18: Gradual Rollout
- Pilot service migration (lowest risk service)
- Monitoring and optimization based on pilot results
- Secondary service migration
- Performance optimization and resource scaling
- Staff training on operational procedures
- Customer communication and documentation
Months 19-24: Full PQC Migration
- Critical service migration to hybrid mode
- Full PQC migration for pilot services
- Classical key cleanup for completed migrations
- Final performance optimization
- Security audit of full PQC implementation
- Documentation and knowledge transfer completion
Related Documentation
- Classical Suite - Current algorithms and implementation
- Post-Quantum Suite - Target PQC algorithms
- Performance Analysis - Performance impact assessment
- Algorithm Specifications - Technical algorithm details