Autonomous bowling pin detection and navigation for the Renesas RZ/V2N mecanum robot. Combines SLAM mapping, YOLO AI detection, and holonomic navigation in a single fullscreen GUI.
```
┌─────────────────────────────────────────────────────────────────┐
│                      V2N Robot Control GUI                      │
│ ┌────────────────────────┐  ┌────────────────────────────────┐ │
│ │                        │  │                                │ │
│ │       SLAM Map         │  │    Camera + YOLO Detection     │ │
│ │    (robot, laser,      │  │   (bounding boxes, distance,   │ │
│ │     grid, target)      │  │      angle, crosshair)         │ │
│ │                        │  │                                │ │
│ └────────────────────────┘  └────────────────────────────────┘ │
│ [GO TO TARGET] [STOP]  ● NAVIGATING 0.45m  [SETTINGS] [QUIT]   │
└─────────────────────────────────────────────────────────────────┘
```
| Guide | Description |
|---|---|
| Architecture | System architecture, block diagrams, design patterns |
| How It Works | Complete technical deep-dive: sensing, AI detection, navigation |
| GUI Guide | GUI layout, controls, settings, keyboard shortcuts |
| Hardware Setup | Hardware wiring, Arduino protocol, sensor configuration |
| API Reference | Module, class, and function reference for developers |
| Component | Requirement |
|---|---|
| Platform | Renesas RZ/V2N board (Yocto Linux) |
| ROS2 | Humble Hawksbill |
| Arduino | Motor controller on /dev/ttyACM0 (115200 baud) |
| LiDAR | RPLidar A1 on /dev/ttyUSB0 |
| Camera | USB camera on /dev/video0 (640x480) |
| Display | HDMI (Wayland/Weston) |
| Network | WiFi AP at 192.168.50.1 |
```bash
# SSH into V2N
ssh root@192.168.50.1

# Copy package (from PC)
scp -r bowling_target_nav root@192.168.50.1:~/ros2_ws/src/

# Run master setup — builds, installs service, starts GUI
cd ~/ros2_ws/src/bowling_target_nav/scripts
./v2n_master_setup.sh
```

This single command:
- Verifies ROS2 Humble is installed
- Builds the package with `colcon build`
- Installs a systemd service for auto-start on boot
- Starts the fullscreen GUI with SLAM + Camera + Navigation
After setup, the robot starts automatically on boot. Manual control:
```bash
./v2n_master_setup.sh --start     # Start GUI
./v2n_master_setup.sh --stop      # Stop everything
./v2n_master_setup.sh --status    # Check status
./v2n_master_setup.sh --build     # Rebuild package
```

```bash
systemctl status robot    # Check status
systemctl start robot     # Start
systemctl stop robot      # Stop
systemctl restart robot   # Restart
journalctl -u robot -f    # Live logs
```

```
Camera (30fps)         LiDAR (10Hz)           Wheel Encoders (20Hz)
      │                      │                          │
      ▼                      ▼                          ▼
YOLO Detection        Cartographer SLAM           Odometry Node
(DRP-AI or ONNX)     (2D occupancy grid)     (odom → base_link TF)
      │                      │                          │
      └──────────┬───────────┘                          │
                 ▼                                      │
    Navigation Engine (20Hz) ◄──────────────────────────┘
      ├── Vision+LiDAR fusion
      ├── Holonomic path planning
      ├── Obstacle avoidance (strafe)
      ├── Blind approach (dead-reckoning)
      └── Multi-signal arrival detection
                 │
                 ▼
          /cmd_vel (Twist)
                 │
                 ▼
    Arduino Motor Controller
(VEL,vx,vy,wz → 4 mecanum wheels)
```
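The last stage maps a body velocity (vx, vy, wz) onto four wheel speeds. A minimal sketch using the standard mecanum inverse-kinematics model; the wheel radius and base dimensions here are illustrative, and `vel_command` only mirrors the `VEL,vx,vy,wz` line format named in the diagram, not the firmware's exact framing.

```python
def mecanum_wheel_speeds(vx, vy, wz, r=0.03, half_l=0.10, half_w=0.10):
    """Map body velocities (m/s, rad/s) to wheel angular speeds (rad/s).

    Standard mecanum inverse kinematics; r, half_l, half_w are
    placeholder geometry values, not the robot's actual dimensions.
    """
    k = half_l + half_w
    fl = (vx - vy - k * wz) / r  # front-left
    fr = (vx + vy + k * wz) / r  # front-right
    rl = (vx + vy - k * wz) / r  # rear-left
    rr = (vx - vy + k * wz) / r  # rear-right
    return fl, fr, rl, rr

def vel_command(vx, vy, wz):
    """Serial line in the 'VEL,vx,vy,wz' shape described above (assumed format)."""
    return f"VEL,{vx:.3f},{vy:.3f},{wz:.3f}\n"
```

Pure forward motion (vy = wz = 0) drives all four wheels at the same speed; strafing and rotation come from the sign pattern across the diagonals.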
The system runs three threads:
- GTK Main Thread (30fps) — Renders GUI, handles user input
- ROS2 Thread (20Hz) — SLAM/LiDAR callbacks, navigation control loop
- Camera Thread (~30fps) — Frame capture, async YOLO detection
All threads communicate through a thread-safe shared state with RLock and 0.1s timeouts.
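A minimal sketch of what such a shared state can look like, assuming dictionary-backed storage; the field names and API are illustrative, not the package's actual `SharedState` class.

```python
import threading

class SharedState:
    LOCK_TIMEOUT = 0.1  # seconds; a slow consumer cannot stall the others

    def __init__(self):
        self._lock = threading.RLock()  # re-entrant: nested access is safe
        self._data = {"pose": None, "detections": [], "nav_state": "IDLE"}

    def set(self, key, value):
        """Write a field; returns False if the lock could not be taken in time."""
        if self._lock.acquire(timeout=self.LOCK_TIMEOUT):
            try:
                self._data[key] = value
                return True
            finally:
                self._lock.release()
        return False  # caller skips this cycle rather than blocking

    def get(self, key, default=None):
        """Read a field; falls back to default on lock timeout."""
        if self._lock.acquire(timeout=self.LOCK_TIMEOUT):
            try:
                return self._data.get(key, default)
            finally:
                self._lock.release()
        return default
```

The timeout-and-skip pattern keeps the 30fps GUI loop from freezing when the ROS2 thread holds the lock during a long callback.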
| Control | Action |
|---|---|
| GO button / `G` key | Start navigating to detected bowling pin |
| STOP button / Space / `S` | Emergency stop |
| SETTINGS button | Open 5-tab parameter tuning window |
| QUIT button / `Q` / `ESC` | Quit application |
The GUI shows:
- Left panel: SLAM map with robot position (green), LiDAR points (red), navigation target (magenta)
- Right panel: Camera feed with YOLO bounding boxes, crosshair on closest pin, distance/angle labels
- Status bar: Color-coded navigation state with speed and obstacle info
```
 ┌────────── STOP pressed ─────────────────────────────┐
 │                                                     │
 ▼          GO pressed                                 │
IDLE ◄───────────────────── Search timeout (30s)       │
 │                                                     │
 │ No target visible                                   │
 ▼                                                     │
SEARCHING ──── Target found! ──────┐                   │
 ▲                                 ▼                   │
 │ Target lost >3s            NAVIGATING ◄─────────────┤
 │ AND far (>0.8m)                 │                   │
 │                                 │ Target lost >3s   │
 │                                 │ AND close (<0.8m) │
 │                                 ▼                   │
 └───────────────────────── BLIND_APPROACH             │
                                   │                   │
                                   │ Arrived / LiDAR stop
                                   ▼                   │
                          ARRIVED (terminal) ──────────┘
```
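The transitions above can be sketched as a small state machine. The thresholds mirror the diagram (3s target-loss timeout, 0.8m near/far split, 30s search timeout), but the function signature is illustrative and omits the GO/STOP button transitions:

```python
from enum import Enum, auto

class NavState(Enum):
    IDLE = auto()
    SEARCHING = auto()
    NAVIGATING = auto()
    BLIND_APPROACH = auto()
    ARRIVED = auto()

LOST_TIMEOUT_S = 3.0    # how long a target may be invisible
NEAR_DISTANCE_M = 0.8   # close enough to dead-reckon the rest
SEARCH_TIMEOUT_S = 30.0 # give up searching after this long

def next_state(state, target_visible, lost_for_s, last_distance_m,
               search_time_s, arrived):
    """One tick of the navigation state machine (partial sketch)."""
    if state is NavState.SEARCHING and search_time_s > SEARCH_TIMEOUT_S:
        return NavState.IDLE
    if state is NavState.SEARCHING and target_visible:
        return NavState.NAVIGATING
    if state is NavState.NAVIGATING and lost_for_s > LOST_TIMEOUT_S:
        if last_distance_m < NEAR_DISTANCE_M:
            return NavState.BLIND_APPROACH  # dead-reckon the final stretch
        return NavState.SEARCHING           # lost while still far away
    if state in (NavState.NAVIGATING, NavState.BLIND_APPROACH) and arrived:
        return NavState.ARRIVED
    return state
```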
| Backend | Description | Performance | Config Value |
|---|---|---|---|
| DRP-AI | V2N hardware accelerator | ~5-10ms inference | drp_binary |
| YOLO ONNX | CPU inference (fallback) | ~45ms inference | yolo_onnx |
| Mock | Testing without camera | Instant | mock |
```bash
# Switch backend via environment variable
export V2N_DETECTOR_TYPE=drp_binary

# Or edit config/robot_config.yaml
```

The system auto-detects the backend: it tries DRP-AI first (V2N hardware) and falls back to ONNX on the CPU.
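A sketch of what that fallback order can look like under the Strategy pattern. Only the backend names come from the table above; the `available()` probes (the `/dev/drpai0` device check in particular) are assumptions:

```python
import os

class DrpBinaryDetector:
    """Placeholder for the DRP-AI hardware backend."""
    name = "drp_binary"
    @staticmethod
    def available():
        return os.path.exists("/dev/drpai0")  # DRP-AI char device (assumption)

class YoloOnnxDetector:
    """Placeholder for the ONNX Runtime CPU backend."""
    name = "yolo_onnx"
    @staticmethod
    def available():
        try:
            import onnxruntime  # noqa: F401
            return True
        except ImportError:
            return False

class MockDetector:
    """Always available; returns no detections (for testing)."""
    name = "mock"
    @staticmethod
    def available():
        return True

BACKENDS = {d.name: d for d in (DrpBinaryDetector, YoloOnnxDetector, MockDetector)}

def make_detector():
    """Honor V2N_DETECTOR_TYPE if set, then probe in priority order."""
    preferred = os.environ.get("V2N_DETECTOR_TYPE")
    for name in (preferred, "drp_binary", "yolo_onnx", "mock"):
        cls = BACKENDS.get(name)
        if cls and cls.available():
            return cls()
    return MockDetector()
```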
| Topic | Type | Source | Purpose |
|---|---|---|---|
| `/cmd_vel` | Twist | Navigation engine | Motor velocity commands |
| `/odom` | Odometry | odometry_node | Wheel encoder odometry |
| `/target_pose` | PoseStamped | vision_node | Detected pin position |
| `/target_detection` | String | vision_node | Detection JSON metadata |
| `/arduino/status` | String | arduino_driver | Connection status |
| `/arduino/odom_raw` | String | arduino_driver | Raw encoder telemetry |
| `/diagnostics` | DiagnosticArray | arduino_driver | System health |
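`/target_detection` carries detection metadata as a JSON string. A hypothetical payload shape; the exact field names used by vision_node are assumptions, chosen to match the values the GUI displays (bbox, distance, angle):

```python
import json

# Hypothetical detection record; field names are illustrative.
detection = {
    "label": "bowling_pin",
    "confidence": 0.87,
    "bbox": [212, 140, 300, 320],  # x1, y1, x2, y2 in the 640x480 frame
    "distance_m": 0.95,            # from bbox size (distance_estimator)
    "angle_deg": -4.2,             # bearing relative to camera center
}

payload = json.dumps(detection)    # what would go into the String message
decoded = json.loads(payload)      # what a subscriber would recover
```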
| Topic | Type | Consumer | Purpose |
|---|---|---|---|
| `/cmd_vel` | Twist | arduino_driver | Motor control |
| `/scan` | LaserScan | Navigation, SLAM | LiDAR obstacle data |
| `/map` | OccupancyGrid | GUI, navigation | SLAM map |
| `/target_pose` | PoseStamped | target_follower | Pin position |
| `/gui/command` | String | MainGuiNode | GUI command dispatch |
```
map ──► odom ──► base_link ──► laser
(SLAM)  (encoders)    ├──────► camera_link
                      └──────► wheel_fl/fr/rl/rr
```
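The chain composes transforms: map→odom (the SLAM correction) applied to odom→base_link (encoder dead-reckoning) gives the robot's pose in the map. A 2D sketch of that composition; the numeric frame values are made up:

```python
import math

def compose(a, b):
    """Compose two 2D transforms (x, y, theta): result = a then b."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

map_to_odom = (0.5, -0.2, 0.0)          # SLAM drift correction (example)
odom_to_base = (2.0, 1.0, math.pi / 2)  # from wheel encoders (example)
map_to_base = compose(map_to_odom, odom_to_base)  # robot pose in the map
```

In the running system, tf2 performs this lookup; the sketch only shows why the split into a SLAM-owned and an encoder-owned transform works.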
Control the robot wirelessly from your PC:

```bash
# Connect to V2N WiFi, then:
cd tools/
python3 pc_robot_controller.py
```

Controls: WASD movement, Q/E rotation, Space stop, speed sliders.
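A hypothetical key-to-velocity mapping in the spirit of `pc_robot_controller.py`; the binding table matches the controls listed above, but the speed values and function names are made up:

```python
# (kx, ky, kw): sign pattern for forward, strafe, and rotation axes.
KEY_BINDINGS = {
    "w": (1, 0, 0),    # forward
    "s": (-1, 0, 0),   # backward
    "a": (0, 1, 0),    # strafe left
    "d": (0, -1, 0),   # strafe right
    "q": (0, 0, 1),    # rotate CCW
    "e": (0, 0, -1),   # rotate CW
    " ": (0, 0, 0),    # stop
}

def key_to_twist(key, linear_speed=0.25, angular_speed=0.8):
    """Turn a keypress into (vx, vy, wz); unknown keys mean stop."""
    kx, ky, kw = KEY_BINDINGS.get(key, (0, 0, 0))
    return (kx * linear_speed, ky * linear_speed, kw * angular_speed)
```

Because the base is holonomic, strafe keys map straight to `vy` instead of requiring a turn-then-drive sequence.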
```
bowling_target_nav/
├── bowling_target_nav/          # Main Python package
│   ├── nodes/                   # 9 ROS2 entry points
│   │   └── main_gui.py          # Primary: 3-thread GUI application
│   ├── state/                   # Thread-safe shared state (Singleton + Facade)
│   │   ├── shared_state.py      # Composes 3 domain stores
│   │   ├── sensor_store.py      # Map, pose, laser
│   │   ├── detection_store.py   # Camera, detections, tunable params
│   │   └── nav_store.py         # Nav state, commands, obstacles
│   ├── nav/                     # Navigation algorithms
│   │   ├── navigator.py         # Holonomic drive, blind approach, obstacles
│   │   └── target_selector.py   # Closest pin selection
│   ├── threads/                 # Worker threads
│   │   ├── ros_node.py          # ROS2 node + 20Hz control loop
│   │   └── camera_worker.py     # Camera + async YOLO detection
│   ├── gui/                     # GTK3 interface
│   │   ├── main_window.py       # Fullscreen window
│   │   ├── settings_window.py   # 5-tab parameter tuning
│   │   └── panels/              # Map and camera renderers
│   ├── detectors/               # AI backends (Strategy pattern)
│   │   ├── base.py              # DetectorBase ABC
│   │   ├── yolo_onnx_detector.py  # ONNX Runtime CPU
│   │   └── drp_binary_detector.py # DRP-AI hardware
│   ├── hardware/                # Hardware abstractions (Factory pattern)
│   │   ├── arduino.py           # Motor control + encoders
│   │   ├── camera.py            # Camera capture
│   │   └── lidar.py             # LiDAR bridge
│   └── utils/
│       └── distance_estimator.py  # Bbox size → distance
├── config/                      # YAML + Lua configuration
├── launch/                      # 7 ROS2 launch files
├── models/                      # YOLO ONNX models
├── scripts/                     # Setup and service scripts
├── tools/                       # PC remote control
├── test/                        # Unit + integration tests
├── urdf/                        # Robot description (TF frames)
└── docs/                        # Documentation
```
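The bbox size → distance idea behind `utils/distance_estimator.py` can be sketched with the pinhole camera model; the focal length and pin height below are assumed values, not the package's calibrated constants:

```python
KNOWN_PIN_HEIGHT_M = 0.38  # regulation bowling pin is about 38 cm (assumption)
FOCAL_LENGTH_PX = 500.0    # camera focal length in pixels (assumption)

def estimate_distance(bbox_height_px):
    """Pinhole model: distance = f * real_height / pixel_height."""
    if bbox_height_px <= 0:
        raise ValueError("bbox height must be positive")
    return FOCAL_LENGTH_PX * KNOWN_PIN_HEIGHT_M / bbox_height_px
```

With these assumed constants, a pin whose bounding box is 190 px tall reads as roughly 1 m away; halving the box height doubles the estimated distance.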
```bash
cd scripts/
./run_tests.sh                # All tests
./run_tests.sh arduino        # Arduino motor tests
./run_tests.sh lidar          # LiDAR sensor tests
./run_tests.sh camera         # Camera + YOLO tests
./run_tests.sh integration    # Sensor fusion tests
./run_tests.sh system         # Full system tests
./run_tests.sh --check        # Check hardware availability
```

```bash
./run_tests.sh --gui                # Full system control
./run_tests.sh --visualize-lidar    # LiDAR visualization
./run_tests.sh --visualize-camera   # Camera detection demo
./run_tests.sh --visualize-fusion   # Sensor fusion view
```

| Host | IP | Access |
|---|---|---|
| V2N Robot | 192.168.50.1 | `ssh root@192.168.50.1` |
```bash
# Verify hardware devices are present
ls -la /dev/ttyACM0 /dev/ttyUSB0 /dev/video0

# Restart the stack
./v2n_master_setup.sh --stop && ./v2n_master_setup.sh --start

# Check recent logs
journalctl -u robot -n 50

# Verify the display environment (Wayland/Weston)
echo $WAYLAND_DISPLAY    # Should be "wayland-0"
echo $XDG_RUNTIME_DIR    # Should be "/run"

# Rebuild after code changes
./v2n_master_setup.sh --build
```

MIT