Study Case · Work in Progress

The control layer for autonomous AI systems

FOFOCA is ThinkNEO's first physical-world study case — an open project built on a 100% NVIDIA stack. Every AI decision the robot makes flows through the ThinkNEO Enterprise AI Control Plane. Not a product for sale. A proof that ThinkNEO can govern a robot you build — and give you full safety and observability over it.

Open Project · 100% NVIDIA Stack · NVIDIA Inception Program · Nemotron Ultra · Nano · Governed by ThinkNEO
FOFOCA Robot

An open robot — governed by ThinkNEO, powered by NVIDIA.

FOFOCA (Fully Operational Feline-free Omniscient Companion Assistant) is an open, non-commercial household robot that operates 24/7 in a real residential environment. It monitors pets, receives deliveries, detects emergencies, and — when needed — calls SAMU, fire brigade, or police autonomously.

It is not a product we sell. It's a proof — a public, open study case — that ThinkNEO can take a robot you built yourself and give it the same enterprise-grade governance a Fortune-500 agent pipeline gets: runtime guardrails, full observability, cost attribution, and an immutable audit trail for every physical decision.

What makes it different isn't the hardware. It's that every single AI decision the robot makes — every reflex, every reasoning step, every action taken in the physical world — passes through a single enforcement layer before reaching any model or actuator. That layer is ThinkNEO. The entire AI stack runs on NVIDIA: Nemotron Ultra, Nemotron Nano, Jetson-class compute, and NVIDIA NIM microservices — end to end.

Built on Raspberry Pi 5 — 8 GB.

The brain of FOFOCA is a Raspberry Pi 5 with 8 GB of RAM — a credit-card-sized computer powerful enough to run real-time computer vision, speech processing, and autonomous navigation. Every AI call is orchestrated locally before being routed through the ThinkNEO Control Plane.

Raspberry Pi 5

Quad-core Arm Cortex-A76 running at 2.4 GHz. Handles OpenCV, YOLOv8 inference, ROS2 navigation, and Piper TTS simultaneously. Connected to an Insta360 camera for 360° vision, a 6-DOF robotic arm, and tracked locomotion via an ESP32.

CPU: 4x Cortex-A76 @ 2.4 GHz
RAM: 8 GB LPDDR4X
GPU: VideoCore VII
I/O: 2x USB 3.0 · 2x USB 2.0
Net: Gigabit · Wi-Fi 5 · BT 5.0
Vision: Insta360 via USB 3.0
Arm: 6x MG90S via RP2040 PWM
Tracks: ESP32 + HW130

Every AI call routes through the control plane.

FOFOCA holds exactly one API key: the ThinkNEO key. All model traffic — Nemotron Ultra for complex reasoning, Nemotron Nano for real-time reflexes, any other provider — goes through the control plane. No direct provider access. No lock-in. Zero code changes when routing shifts.

🤖 FOFOCA: Raspberry Pi 5 · camera, arm, tracks, sensors
→ 🛡 ThinkNEO Control Plane: Guardrails · Observability · FinOps · Audit Trail · Routing
→ 🧠 Nemotron Ultra / Nano: Reasoning + real-time reflexes · any provider, swappable
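In code, the single-key pattern might look like the sketch below. The endpoint URL, model aliases, and task classes are illustrative assumptions, not published ThinkNEO API details; the point is that the robot holds one key, names a task class, and never addresses a provider directly.

```python
import os

# Hypothetical control-plane endpoint and model aliases -- illustrative
# placeholders, not a documented ThinkNEO API.
CONTROL_PLANE_URL = "https://api.thinkneo.example/v1/chat"
MODEL_BY_TASK = {
    "reflex": "nemotron-nano",      # real-time decisions (obstacles, cooldowns)
    "reasoning": "nemotron-ultra",  # complex planning, emergency assessment
}

def build_request(task_class: str, prompt: str) -> dict:
    """Build one control-plane request. The robot only picks a task class;
    routing to the actual model/provider happens on the control plane."""
    return {
        "url": CONTROL_PLANE_URL,
        "headers": {"Authorization": f"Bearer {os.environ.get('THINKNEO_API_KEY', '')}"},
        "json": {
            "model": MODEL_BY_TASK[task_class],
            "messages": [{"role": "user", "content": prompt}],
        },
    }

req = build_request("reflex", "Obstacle 0.4 m ahead; stop or steer?")
```

Because routing lives in `MODEL_BY_TASK` and on the server side, swapping models changes no robot code, which is the "zero code changes" claim above.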
Runtime Guardrails
Hard limits on what the robot can do — enforced before a prompt ever reaches a model. Emergency-only tool calls, action cooldowns, safe-zone constraints.
Policy Enforcement
Which model handles which decision. Reasoning goes to Ultra. Reflexes go to Nano. Fallback rules, context-aware routing, provider-agnostic by design.
Immutable Audit Trail
Every decision the robot made, every tool call, every emergency trigger — logged, timestamped, tamper-proof. When the robot calls SAMU, there's a full trace.
Real-Time FinOps
Per-task cost attribution. How much did it cost the robot to resolve a barking episode? To receive a delivery? The control plane knows, in real time.
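A tamper-evident trail of the kind described above is commonly built as a hash chain: each entry commits to the hash of the previous one, so editing any past record breaks verification of everything after it. The sketch below is a generic illustration of that technique, not ThinkNEO's actual storage format.

```python
import hashlib
import json

def append_entry(log: list, event: dict) -> None:
    """Append an audit entry whose hash covers the previous entry's hash,
    so any retroactive edit invalidates every later entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"event": event, "prev": prev_hash, "hash": entry_hash})

def verify(log: list) -> bool:
    """Recompute the whole chain; returns False if any entry was altered."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

# Illustrative events only (timestamps and actions are made up).
log = []
append_entry(log, {"t": "12:00:00Z", "action": "call_samu", "reason": "fall detected"})
append_entry(log, {"t": "12:00:05Z", "action": "notify_owner"})
assert verify(log)                   # intact chain verifies
log[0]["event"]["action"] = "none"   # tamper with history...
assert not verify(log)               # ...and verification fails
```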

Live data from production 24/7.

FOFOCA is generating real production telemetry — not a sandbox, not a demo. The dashboard below will stream live as the study case moves through deployment phases.

Metric · Source · What it proves
Requests / day · ThinkNEO Dashboard · Real AI volume in live home deployment
Cost per task · AI FinOps · Cost per robot action (dog monitoring, delivery intake…)
Guardrails fired · Runtime · Real-world safety interventions in physical AI
Latency by model · Observability · Nemotron Ultra vs Nano, fresh production numbers
Governed uptime · Monitoring · 24/7 operation with zero governance downtime
Model swap impact · Routing · Zero code changes to the robot when routing shifts
Emergencies handled · Audit Trail · Full traceability on SAMU / fire / police triggers
Controlling what an autonomous robot is allowed to decide, what it is allowed to do, and proving it afterwards is one of the biggest problems in AI right now.

We're building the layer that solves it.
— ThinkNEO · Physical-world governance

13 phases. Public from day one.

Mechanical base → perception → locomotion → ThinkNEO integration → audio pipeline → local server → dog module → delivery module → personal assistant → smart-home mesh → SLAM → emergency module → public dashboard. Each phase ships a concrete capability and the governance evidence behind it.

AI Models: NVIDIA Nemotron Ultra · Nemotron Nano
Inference: NVIDIA NIM microservices
Compute: Jetson-class + Raspberry Pi 5 edge
Vision: Insta360 + YOLOv8 + InsightFace
Audio: YAMNet + faster-whisper + Piper TTS
Memory: ChromaDB + PostgreSQL + MinIO
Navigation: ROS2 + SLAM
Governance: ThinkNEO Control Plane

100% NVIDIA AI stack. FOFOCA is a member of the NVIDIA Inception Program and uses Nemotron models end-to-end. The entire reasoning and reflex pipeline runs on NVIDIA infrastructure, governed by ThinkNEO.

Want to build your own FOFOCA?

FOFOCA is an open project. Below is the complete bill of materials — every board, sensor, actuator, and server component. Build yours, connect it to ThinkNEO, and get full AI governance from day one.

Compute & Control
🧠
Raspberry Pi 5 — 8 GB
Main brain. Runs OpenCV, YOLOv8, ROS2, Piper TTS. Orchestrates all AI calls via ThinkNEO.
Brain
📡
Raspberry Pi Zero 2 W
Co-processor for audio pipeline, watchdog, and offline fallback.
Brain
🎛️
RP2040 (Pico)
Precise PWM control for the 6-DOF robotic arm (6x MG90S servos).
Actuator
ESP32
Real-time track motor control, sensor polling, Bluetooth comms.
Actuator
🔌
Arduino Mega 2560
Sensor hub — I/O expansion for auxiliary sensors and peripherals.
Sensor
📟
Arduino Uno R4 WiFi
Auxiliary interface — status display, lightweight tasks.
Sensor
Sensors & Perception
📷
Insta360 Camera
360° vision — navigation, object detection, facial recognition, security surveillance.
Sensor
🎤
Microphone (ReSpeaker / USB)
Audio input — voice commands, dog bark detection (YAMNet), speech-to-text (faster-whisper).
Sensor
📺
ESP8266 + OLED Display
Visual status panel — shows robot state, battery level, current task.
Sensor
🛡️
Keystudio IO Expander v5.0
I/O expansion board + Sensor Shield GVS for auxiliary sensors.
Sensor
Locomotion & Actuators
🦾
Robotic Arm — 6x MG90S + Gripper
6 degrees of freedom. Picks up packages, opens doors, manipulates objects.
Actuator
🛤️
Tank Tracks (Tractor-style)
Stable all-terrain locomotion. Driven by DC motors via HW130 controller.
Actuator
🔧
HW130 Motor Driver
H-bridge motor controller for tank track DC motors. Controlled by ESP32.
Actuator
⚙️
Microstep Driver
Precision stepper motor control for fine movements.
Actuator
🔊
Bluetooth Speaker
Audio output — FOFOCA’s voice (Piper TTS), alerts, announcements.
Comms
Power System
🔋
Zircon 12V 6Ah+ Battery
Main power source. Feeds all modules through independent regulators.
Power
🔻
5x LM2596 Buck Converters
Independent voltage regulation: Pi5 (5V/3A), Arm (5V/3A), Servos (5V/2A), USB (5V/2A), Aux (5V/1A).
Power
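Worked as a rough power budget from the rail ratings above (the 85% converter efficiency and fully simultaneous load are assumptions; real draw will be lower and bursty):

```python
# Rated 5 V loads per LM2596 rail, from the BOM above (amps)
rails = {"Pi5": 3.0, "Arm": 3.0, "Servos": 2.0, "USB": 2.0, "Aux": 1.0}

load_w = 5.0 * sum(rails.values())   # 55 W total at the 5 V outputs
efficiency = 0.85                    # assumed LM2596 buck efficiency
input_w = load_w / efficiency        # ~64.7 W drawn from the battery
battery_wh = 12.0 * 6.0              # 12 V x 6 Ah = 72 Wh
runtime_h = battery_wh / input_w     # ~1.1 h at worst-case full load

print(f"{load_w:.0f} W load -> {input_w:.1f} W input -> {runtime_h:.2f} h runtime")
```

In practice the servos and Pi rarely pull their rated maximum at the same time, so real runtime should sit well above this worst-case floor.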
📱
SIM800L / SIM7600 GSM Module
Emergency calls — SAMU (192), Fire (193), Police (190). Autonomous dialing.
Emergency
Local Server
🖥️
Your Gaming PC
Any gaming PC works. Runs long-term memory, APIs, databases. Ubuntu Server or Windows WSL2.
Server
🗄️
ChromaDB + PostgreSQL + MinIO
Semantic memory (vectors), event history (SQL), video storage (object store).
Server
🌐
FastAPI + Mosquitto MQTT
REST API for Pi 5 ↔ local server communication. MQTT for smart home integration.
Server
📊
Grafana + Ollama
Monitoring dashboards and local LLM inference for offline fallback.
Server
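The offline fallback the server cards describe can be sketched generically: try the governed control-plane path first, and degrade to the local model only when it fails. The function names below are illustrative stubs; real wiring would call the ThinkNEO endpoint and a local Ollama instance.

```python
def answer(prompt: str, primary, fallback) -> tuple:
    """Prefer the governed control-plane path; on any failure, fall back to
    the local model so the robot keeps working with no uplink."""
    try:
        return ("primary", primary(prompt))
    except Exception:
        return ("fallback", fallback(prompt))

# Stubs standing in for the ThinkNEO call and a local Ollama model.
def cloud_ok(p): return f"cloud:{p}"
def cloud_down(p): raise ConnectionError("no uplink")
def local(p): return f"local:{p}"

assert answer("status?", cloud_ok, local) == ("primary", "cloud:status?")
assert answer("status?", cloud_down, local) == ("fallback", "local:status?")
```

One caveat worth noting: decisions taken on the fallback path bypass the cloud control plane, so they would need to be buffered locally and reconciled into the audit trail once the uplink returns.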
Software Stack
🛡️
ThinkNEO Control Plane
Single API key. Routes all AI calls. Guardrails, observability, FinOps, audit trail.
AI Gov
👁️
OpenCV + YOLOv8n + InsightFace
Computer vision — object detection, facial recognition via Insta360.
Vision
🗣️
YAMNet + faster-whisper + Piper TTS
Sound classification (barking), speech-to-text, text-to-speech.
Audio
🗺️
ROS2 + SLAM
Autonomous navigation and real-time environment mapping.
Nav
🧠
Nemotron Ultra + Nano
Ultra for complex reasoning and emergencies. Nano for real-time reflexes. Via ThinkNEO.
AI
📱
Flutter Mobile App
Remote control, live camera feed, push notifications, manual override.
App

FOFOCA is an open study case. You can replicate the entire project using off-the-shelf components and a single ThinkNEO API key. All AI governance — guardrails, observability, cost control, audit trail — comes built in.

Get a ThinkNEO API Key →