Few-Shot Gesture Recognition

Powered by Hyperdimensional Computing


How it works

  1. Train gestures with 5 examples
  2. No neural network training
  3. Add new gestures incrementally
  4. Instant vector similarity matching

Dimensions: 10,000
Features: 48

What You're Seeing

This demo recognizes hand gestures using Hyperdimensional Computing (HDC)—a brain-inspired approach to machine learning that replaces neural network training with algebraic operations on high-dimensional vectors.

How it works:

  • Your hand landmarks (48 features) are encoded into a 10,000-dimensional hypervector
  • Each gesture class is represented by a single prototype vector—the "bundle" of its examples
  • Classification computes similarity between your current pose and all prototypes
  • Adding a new gesture just means creating a new prototype—no retraining required
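The pipeline above can be sketched end to end. This is a minimal illustration, not the demo's actual code: the random-projection-plus-sign encoder is one standard way to map real-valued features to bipolar hypervectors, and all names (`encode`, `bundle`, `classify`, the gesture labels) are hypothetical.

```python
import numpy as np

D = 10_000       # hypervector dimensionality (matches the demo)
N_FEATURES = 48  # hand-landmark features (matches the demo)

rng = np.random.default_rng(0)

# Assumed encoder: a fixed random projection followed by sign(),
# mapping 48 real features to a bipolar {-1, +1} hypervector.
projection = rng.standard_normal((N_FEATURES, D))

def encode(features: np.ndarray) -> np.ndarray:
    """Map a 48-dim feature vector to a D-dim bipolar hypervector."""
    return np.sign(features @ projection)

def bundle(hypervectors: list) -> np.ndarray:
    """'Train' a class prototype: elementwise majority vote of examples."""
    return np.sign(np.sum(hypervectors, axis=0))

def classify(features: np.ndarray, prototypes: dict) -> str:
    """Return the label whose prototype is most similar to the current pose."""
    hv = encode(features)
    sims = {label: hv @ p / D for label, p in prototypes.items()}
    return max(sims, key=sims.get)

# Learn two gestures from 5 noisy examples each (synthetic stand-in data).
centers = {"open_palm": rng.standard_normal(N_FEATURES),
           "fist": rng.standard_normal(N_FEATURES)}
prototypes = {}
for label, center in centers.items():
    examples = [encode(center + 0.1 * rng.standard_normal(N_FEATURES))
                for _ in range(5)]
    prototypes[label] = bundle(examples)
```

Note that "training" here is a single vectorized sum and sign, which is why it is effectively instant, and why the whole loop runs comfortably on a CPU.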

What's different from traditional ML:

                  Traditional ML            HDC
Training          Gradient descent (slow)   Bundling (instant)
Add new class     Retrain model             Add prototype
Examples needed   50-100+                   3-5
Compute           GPU recommended           CPU only
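The "add prototype" row is the key operational difference: existing classes are never touched. A hypothetical sketch of adding a gesture at runtime, using bit-flips to simulate pose variation (all names illustrative):

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(1)

def bundle(hvs):
    """Majority vote over bipolar hypervectors -> one prototype."""
    return np.sign(np.sum(hvs, axis=0))

def noisy(hv, p=0.05):
    """Flip a fraction p of components -- simulates pose variation."""
    mask = rng.random(D) < p
    return np.where(mask, -hv, hv)

# Existing prototypes stay untouched; a new gesture is just a new dict entry.
prototypes = {"open_palm": np.sign(rng.standard_normal(D))}

base = np.sign(rng.standard_normal(D))  # stand-in for an encoded new pose
prototypes["thumbs_up"] = bundle([noisy(base) for _ in range(5)])

# The bundled prototype stays close to the underlying pose it summarizes.
similarity = prototypes["thumbs_up"] @ base / D
```

Because bundling only reads the new examples, adding the sixth, seventh, or fiftieth gesture costs the same as the first; there is no retraining step whose time grows with the number of classes.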

The bigger picture: This demo shows classification, but HDC's algebraic framework extends to compositional representations—encoding structure, sequences, and relations in the same vector space. That capability is the foundation for our work on prosthetics, robotics, and edge AI.
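The "compositional" claim can be made concrete with the other core HDC operation, binding (elementwise multiplication), which attaches a filler to a role so that several role–filler pairs can be superposed into one record. A minimal sketch, with all role and filler names invented for illustration:

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(2)
hv = lambda: rng.choice([-1, 1], size=D)  # fresh random bipolar hypervector

# Roles and fillers for a structured gesture description.
role_shape, role_motion = hv(), hv()
fill_fist, fill_swipe = hv(), hv()

# Bind each filler to its role, then bundle the pairs into one record.
record = np.sign(role_shape * fill_fist + role_motion * fill_swipe)

# Unbinding: multiplying by a role recovers a noisy copy of its filler,
# because bipolar binding is its own inverse (x * x = 1 componentwise).
recovered = record * role_shape
sim_fist = recovered @ fill_fist / D    # high: correct filler
sim_swipe = recovered @ fill_swipe / D  # near zero: wrong filler
```

The same record format scales to sequences (via permutation of positions) and relations, which is what makes a single vector space usable for richer tasks than flat classification.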