The control layer for autonomous AI systems
FOFOCA is ThinkNEO's first physical-world case study — an open project built on a 100% NVIDIA stack. Every AI decision the robot makes flows through the ThinkNEO Enterprise AI Control Plane. Not a product for sale. A proof that ThinkNEO can govern a robot you build — and give you full safety and observability over it.
// What it is
An open robot — governed by ThinkNEO, powered by NVIDIA.
FOFOCA (Fully Operational Feline-free Omniscient Companion Assistant) is an open, non-commercial household robot that operates 24/7 in a real residential environment. It monitors pets, receives deliveries, detects emergencies, and — when needed — calls SAMU, the fire brigade, or the police autonomously.
It is not a product we sell. It's a proof — a public, open case study — that ThinkNEO can take a robot you built yourself and give it the same enterprise-grade governance a Fortune-500 agent pipeline gets: runtime guardrails, full observability, cost attribution, and an immutable audit trail for every physical decision.
What makes it different isn't the hardware. It's that every single AI decision the robot makes — every reflex, every reasoning step, every action taken in the physical world — passes through a single enforcement layer before reaching any model or actuator. That layer is ThinkNEO. The entire AI stack runs on NVIDIA: Nemotron Ultra, Nemotron Nano, Jetson-class compute, and NVIDIA NIM microservices — end to end.
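To make "immutable audit trail" concrete, here is a minimal sketch of one common way to build such a trail: a hash chain, where each record commits to the hash of the record before it, so any later tampering is detectable. The record fields and class name are illustrative assumptions, not ThinkNEO's actual API or storage format.

```python
import hashlib
import json
import time

class AuditTrail:
    """Hash-chained decision log: editing any past entry breaks verification."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value before the first entry

    def record(self, decision: str, model: str, action: str) -> dict:
        # Each entry commits to the hash of the previous one via "prev".
        entry = {
            "ts": time.time(),
            "decision": decision,
            "model": model,
            "action": action,
            "prev": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Re-derive every hash; any edited field or broken link fails.
        prev = "0" * 64
        for e in self.entries:
            if e["prev"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

A production control plane would persist and anchor such a chain externally; the point of the sketch is only that "immutable" here means cryptographically tamper-evident, not merely append-only.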
// Core hardware
Built on Raspberry Pi 5 — 8 GB.
The brain of FOFOCA is a Raspberry Pi 5 with 8 GB of RAM — a credit-card-sized computer powerful enough to run real-time computer vision, speech processing, and autonomous navigation. Every AI call is orchestrated locally before being routed through the ThinkNEO Control Plane.
Raspberry Pi 5
8 GB RAM
Quad-core Arm Cortex-A76 running at 2.4 GHz. Handles OpenCV, YOLOv8 inference, ROS2 navigation, and Piper TTS — all simultaneously. Connected to Insta360 for 360° vision, a 6-DOF robotic arm, and tracked locomotion via ESP32.
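Running those workloads "all simultaneously" on one board usually means a producer/consumer pipeline: a perception thread pushes detections into a bounded queue that an action thread drains. The sketch below shows that shape with stand-in functions — the detector and the speech call are placeholders, not the real YOLOv8 or Piper invocations.

```python
import queue
import threading

def perception_loop(frames, detections: queue.Queue):
    """Producer: runs detection on each frame (stand-in for YOLOv8)."""
    for frame in frames:
        label = "dog" if "dog" in frame else "none"
        detections.put(label)
    detections.put(None)  # sentinel: camera stream ended

def action_loop(detections: queue.Queue, spoken: list):
    """Consumer: reacts to detections (stand-in for Piper TTS output)."""
    while (label := detections.get()) is not None:
        if label != "none":
            spoken.append(f"Detected {label}")

q = queue.Queue(maxsize=8)  # bounded: perception can't outrun action
spoken = []
t1 = threading.Thread(target=perception_loop, args=(["dog frame", "empty"], q))
t2 = threading.Thread(target=action_loop, args=(q, spoken))
t1.start(); t2.start()
t1.join(); t2.join()
```

The bounded queue is the important design choice: on an 8 GB board, back-pressure between stages is what keeps vision, navigation, and TTS from starving one another.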
// Governance architecture
Every AI call routes through the control plane.
FOFOCA holds exactly one API key: the ThinkNEO key. All model traffic — Nemotron Ultra for complex reasoning, Nemotron Nano for real-time reflexes, any other provider — goes through the control plane. No direct provider access. No lock-in. Zero code changes when routing shifts.
Camera, arm, tracks, sensors
FinOps · Audit Trail · Routing
Any provider, swappable
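The single-key pattern above can be sketched in a few lines: robot-side code names a task and a tier ("reflex" vs. "reasoning"), never a provider, and every request carries the one credential to one endpoint. The URL, header names, and the `tier` field are assumptions for illustration — not ThinkNEO's real API.

```python
import json
import urllib.request

# Hypothetical endpoint and key — placeholders, not the real API surface.
CONTROL_PLANE = "https://api.thinkneo.example/v1/chat"
API_KEY = "tn_example_key"  # the ONLY credential stored on the robot

def govern(task: str, tier: str) -> urllib.request.Request:
    """Build a governed request. Which model serves it ('reflex' -> small
    fast model, 'reasoning' -> large model) is decided server-side, so
    swapping providers needs zero robot-side code changes."""
    body = json.dumps({"input": task, "tier": tier}).encode()
    return urllib.request.Request(
        CONTROL_PLANE,
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

req = govern("unknown person at the gate", "reasoning")
```

Because provider names never appear in robot code, rerouting Nemotron Ultra traffic — or replacing a provider entirely — is a control-plane configuration change, not a firmware update.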
// Case study metrics
Live data from 24/7 production.
FOFOCA is generating real production telemetry — not a sandbox, not a demo. The dashboard below will stream live data as the case study moves through deployment phases.
Controlling what an autonomous robot is allowed to decide, what it is allowed to do, and proving it afterwards is one of the biggest problems in AI right now.
We're building the layer that solves it.
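The simplest mental model for "what it is allowed to do" is a policy gate checked before any actuator command executes: an allow-list, plus actions that additionally require confirmation. The action names and policy shape below are illustrative, not a real ThinkNEO policy schema.

```python
# Illustrative runtime guardrail: every actuator command passes through
# this gate before execution. Action names are hypothetical examples.
ALLOWED_ACTIONS = {"speak", "move", "open_door", "call_emergency"}
REQUIRES_CONFIRMATION = {"call_emergency"}  # high-impact: needs a second signal

def gate(action: str, confirmed: bool = False) -> bool:
    """Return True only if the action passes policy."""
    if action not in ALLOWED_ACTIONS:
        return False  # anything not explicitly allowed is denied
    if action in REQUIRES_CONFIRMATION and not confirmed:
        return False  # high-impact actions need explicit confirmation
    return True
```

Deny-by-default is what makes the audit trail meaningful: every physical action either cleared a named policy or never happened.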
// Development roadmap
13 phases. Public from day one.
Mechanical base → perception → locomotion → ThinkNEO integration → audio pipeline → local server → dog module → delivery module → personal assistant → smart-home mesh → SLAM → emergency module → public dashboard. Each phase ships a concrete capability and the governance evidence behind it.
100% NVIDIA AI stack. FOFOCA is a member of the NVIDIA Inception Program and uses Nemotron models end-to-end. The entire reasoning and reflex pipeline runs on NVIDIA infrastructure, governed by ThinkNEO.
// Open hardware
Want to build your own FOFOCA?
FOFOCA is an open project. Below is the complete bill of materials — every board, sensor, actuator, and server component. Build yours, connect it to ThinkNEO, and get full AI governance from day one.
FOFOCA is an open case study. You can replicate the entire project using off-the-shelf components and a single ThinkNEO API key. All AI governance — guardrails, observability, cost control, audit trail — comes built in.
Get a ThinkNEO API Key →