# **RFA: Portable RAW Reconstruction Accelerator for Legacy CCD Imaging**

Group Members: Guyan Wang, Yuhong Chen

## **1\. Problem Statement**

**The "Glass-Silicon Gap":** Many legacy digital cameras (circa 2000-2010) are equipped with premium optics (Leica, Zeiss, high-grade Nikon/Canon glass) that outresolve their internal processing pipelines. While the optical pathway is high-fidelity, the final image quality is bottlenecked by:

- **Obsolete Signal Chains:** Early-stage Analogue-to-Digital Converters (ADCs) and readout circuits introduce significant read noise and pattern noise.
- **Destructive Processing:** In-camera JPEGs destroy dynamic range and detail. Even legacy RAW files are often processed with rudimentary demosaicing algorithms that fail to distinguish high-frequency texture from sensor noise.
- **Usability Void:** Users seeking the unique "CCD look" are forced to rely on cumbersome desktop post-processing workflows (e.g., Lightroom, Topaz), preventing a portable, shoot-to-share experience.

## **2\. Solution Overview**

**The "Digital Back" External Accelerator:** We propose a standalone, handheld hardware device, a "smart reconstruction box," that interfaces physically with legacy CCD cameras. Instead of relying on the camera's internal image processor, this device ingests the raw sensor data (CCD RAW) and applies a hybrid reconstruction pipeline.

The core innovation is a **Hardware-Oriented Hybrid Pipeline**:

- **Classical Signal Processing:** Handles deterministic error correction (black level subtraction, gain normalization, hot pixel mapping).
- **Learned Estimator (AI):** A lightweight Convolutional Neural Network (CNN) or Vision Transformer model optimized for microcontroller inference (TinyML). This model does not "hallucinate" new details but acts as a probabilistic estimator to separate signal from stochastic noise based on the physics of CCD sensor characteristics.
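
The classical stage above can be sketched in a few lines. This is a minimal illustration only, not the device firmware (which would run in C on the STM32); the function name, the RGGB layout, and the hot-pixel repair strategy are all our assumptions:

```python
def classical_correct(raw, width, black_level, wb_gains, hot_pixels):
    """Deterministic corrections on a row-major RGGB Bayer mosaic.

    raw         -- flat list of ADC counts, one per photosite
    black_level -- constant ADC offset, subtracted first
    wb_gains    -- per-channel gains keyed by Bayer color: {'R','G','B'}
    hot_pixels  -- set of flat indices known from calibration to be stuck
    """
    bayer = ['R', 'G', 'G', 'B']          # color at (row % 2, col % 2)

    def color_at(i):
        r, c = divmod(i, width)
        return bayer[(r % 2) * 2 + (c % 2)]

    out = []
    for i, v in enumerate(raw):
        if i in hot_pixels:
            # crude repair: copy the photosite two positions back
            # (same color within a row)
            v = raw[i - 2] if i >= 2 else black_level
        v = max(v - black_level, 0)       # black level subtraction
        v *= wb_gains[color_at(i)]        # white balance gain
        out.append(v)
    return out
```

In the real pipeline each step would operate in place on the SDRAM frame buffer, but the ordering (repair, offset, gain) is the point of the sketch.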

The device will feature a touchscreen interface for file selection and "film simulation" style filter application, targeting an output quality perceptually comparable to a modern full-frame sensor (e.g., Sony A7 III) in terms of dynamic range recovery and noise floor.

## **3\. Solution Components**

### **Component A: The Compute Core (Embedded Host)**

- **MCU:** STMicroelectronics **STM32H7 Series** (e.g., STM32H747/H757).
- _Rationale:_ Dual-core architecture (Cortex-M7 + M4) allows separation of UI logic and heavy DSP operations. The Chrom-ART Accelerator helps with display handling, while the high clock speed supports the computationally intensive reconstruction algorithms.
- **Memory:** External SDRAM/HyperRAM expansion (essential for buffering full-resolution RAW files, e.g., 10MP-24MP) and high-speed QSPI Flash for AI model weight storage.
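
A quick sizing check shows why external RAM is essential: on-chip SRAM in this MCU class is on the order of 1 MB, while a single unpacked mosaic frame needs tens of MB. This is back-of-envelope arithmetic; the 16-bit-per-sample unpacked storage format is our assumption:

```python
def raw_buffer_bytes(megapixels, bits_per_sample=16):
    """Bytes needed to buffer one full Bayer frame, samples stored unpacked."""
    return megapixels * 1_000_000 * bits_per_sample // 8

# 10 MP needs ~20 MB and 24 MP needs ~48 MB -- far beyond internal SRAM,
# hence the external SDRAM/HyperRAM requirement.
```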

### **Component B: Connectivity & Data Ingestion Interface**

- **Physical I/O:** USB OTG (On-The-Go) Host port.
- _Function:_ The device acts as a USB Host, mounting the camera (or the camera's card reader) as a Mass Storage Device to pull RAW files (.CR2, .NEF, .RAF, .DNG).
- **Storage:** On-board MicroSD card slot for saving processed/reconstructed JPEGs or TIFFs.
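
Once the camera's volume is mounted, the firmware only needs to walk the filesystem for known RAW extensions. On the STM32 this would be FatFs over the USB MSC class in C; the Python sketch below shows just the scanning logic (directory layout and function name are illustrative):

```python
import os

# Extensions the device targets, lower-cased for case-insensitive matching
RAW_EXTS = {'.cr2', '.nef', '.raf', '.dng'}

def find_raw_files(mount_point):
    """Walk a mounted camera volume (typically its DCIM tree) for RAW files."""
    hits = []
    for root, _dirs, files in os.walk(mount_point):
        for name in files:
            if os.path.splitext(name)[1].lower() in RAW_EXTS:
                hits.append(os.path.join(root, name))
    return sorted(hits)
```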

### **Component C: Hybrid Reconstruction Algorithm**

- **Stage 1 (DSP):** Linearization, dark frame subtraction (optional calibration), and white balance gain application.
- **Stage 2 (NPU/AI):** A quantization-aware trained model (likely TFLite for Microcontrollers or STM32-AI) trained specifically on _noisy CCD -to- clean CMOS_ image pairs.
- _Task:_ Joint Demosaicing and Denoising (JDD).
- **Stage 3 (Color):** Application of specific "Film Looks" (LUTs) selected by the user via the UI.
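
Stage 3 reduces to a table lookup. Production "film looks" are usually 3D LUTs over (R, G, B); the 1D per-channel version below is our simplification to show the interpolation idea on normalized values:

```python
def apply_lut_1d(values, lut):
    """Map normalized [0, 1] channel values through a 1D tone LUT,
    linearly interpolating between the two nearest table entries."""
    n = len(lut) - 1
    out = []
    for v in values:
        x = min(max(v, 0.0), 1.0) * n     # position in table coordinates
        i = int(x)
        if i >= n:                        # exactly at the top entry
            out.append(lut[n])
        else:
            frac = x - i
            out.append(lut[i] * (1.0 - frac) + lut[i + 1] * frac)
    return out
```

On the MCU the same lookup would run in fixed point, but the clamp-index-interpolate structure is identical.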

### **Component D: Human-Machine Interface (HMI)**

- **Display:** 2.8" to 3.5" Capacitive Touchscreen (SPI or MIPI DSI interface).
- **GUI Stack:** TouchGFX or LVGL.
- _Workflow:_ User plugs in camera -> Device scans for RAWs -> User selects thumbnails -> User chooses "Filter/Profile" -> Device processes and saves to SD card.
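
The workflow above maps naturally onto a small state machine in the GUI task. The sketch below is ours (state names and transitions are assumptions, not TouchGFX/LVGL API):

```python
from enum import Enum, auto

class UiState(Enum):
    WAIT_FOR_CAMERA = auto()   # no USB device mounted yet
    SCANNING = auto()          # walking the volume for RAW files
    BROWSE = auto()            # user selects thumbnails
    PICK_PROFILE = auto()      # user chooses a "Filter/Profile"
    PROCESSING = auto()        # pipeline runs, progress bar shown
    DONE = auto()              # result saved to SD card

# Allowed transitions mirror the workflow; backwards edges let the user
# cancel or unplug the camera at any browsing step.
TRANSITIONS = {
    UiState.WAIT_FOR_CAMERA: {UiState.SCANNING},
    UiState.SCANNING: {UiState.BROWSE, UiState.WAIT_FOR_CAMERA},
    UiState.BROWSE: {UiState.PICK_PROFILE, UiState.WAIT_FOR_CAMERA},
    UiState.PICK_PROFILE: {UiState.PROCESSING, UiState.BROWSE},
    UiState.PROCESSING: {UiState.DONE},
    UiState.DONE: {UiState.BROWSE},
}

def step(state, target):
    """Advance the UI, rejecting transitions the workflow does not allow."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target
```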

## **4\. Criterion for Success**

To be considered successful, the prototype must meet the following benchmarks:

- **Quality Parity:** The output image, when blind-tested against the same scene shot on a modern CMOS sensor (Sony A7 III class), must show statistically insignificant differences in perceived noise at ISO 400-800 equivalent.
- **Edge Preservation:** The AI reconstruction must demonstrate a reduction in color moiré and false-color artifacts compared to standard bilinear demosaicing, without "smoothing" genuine texture (measured via MTF charts).
- **Latency:** Total processing time for a 10-megapixel RAW file (ingest, reconstruct, encode, save) must be under **15 seconds** on the STM32 hardware.
- **Universal RAW Support:** Successful parsing and decoding of at least two major legacy formats (e.g., Nikon .NEF from D200 era and Canon .CR2 from 5D Classic era).
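
The latency target translates into a per-pixel cycle budget that gates algorithm choice. A rough sketch, assuming the Cortex-M7 at its 480 MHz maximum and ignoring memory stalls:

```python
def cycles_per_pixel(megapixels, seconds, clock_hz=480_000_000):
    """Per-pixel CPU cycle budget for the entire pipeline (no memory stalls)."""
    pixels_per_second = megapixels * 1_000_000 / seconds
    return clock_hz / pixels_per_second

# 10 MP in 15 s leaves roughly 720 cycles per pixel to cover DSP,
# CNN inference, and JPEG encoding combined -- one reason the model
# must be aggressively quantized.
```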

## **5\. Alternatives**

- **Desktop Post-Processing (Software Only):**
- _Pros:_ Effectively unlimited computing power, established tools (DxO PureRAW), highly customizable.
- _Cons:_ Destroys the portability of the photography experience; cannot be done "in the field." Users must also become proficient with the software's parameters, which requires significant self-training (not beginner-friendly).
- **Smartphone App (via USB-C dongle):**
- _Pros:_ Powerful processors (Snapdragon/A-Series), high-res screens, easy to use.
- _Cons:_ Lack of low-level control over USB mass-storage protocols for obscure legacy cameras; high friction in file management; operating-system overhead prevents bare-metal optimization of the signal pipeline; the phone's built-in imaging algorithms are tuned for its own modern CMOS sensor, not legacy CCDs.
- **FPGA Implementation (Zynq/Cyclone):**
- _Pros:_ Parallel processing could make reconstruction instant.
- _Cons:_ Significantly higher complexity, cost, and power consumption compared to an STM32 implementation; higher barrier to entry for a "mini project."

# Backpack Buddy - Wearable Proximity/Incident Detection for Nighttime Safety

Team Members:

- Jeric Cuasay (cuasay2)

- Rahul Kajjam (rkajjam2)

- Emily Grob (eegrob2)

# Problem

The UIUC campus is relatively safe: emergency call buttons are placed throughout campus, and security personnel patrol regularly. However, crime still occurs and affects students walking alone, especially at night. Staying late to work in a classroom or another building can lead to a long, unnerving walk home. When the weather turns colder, the streets are generally less populated, and walking home at night can feel more dangerous because of the isolation.

# Solution

A wearable system that uses a night-vision camera and machine-learning image-processing techniques to detect pedestrians approaching the user at an abnormal speed, or from an angle that may be out of sight. The system vibrates to alert the user to look around and check their surroundings.
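
One simple way to flag an "abnormal approach" from the detector's output is to watch how fast a pedestrian's bounding box grows between frames: the box widens as the person closes distance. This is a sketch of the idea only; the 1.2x-per-second growth threshold is an arbitrary placeholder to be tuned on real footage:

```python
def approach_alert(track, fps, growth_per_s_thresh=1.2):
    """Return True when a tracked pedestrian's bounding box grows fast
    enough to suggest an abnormally quick approach.

    track -- per-frame bounding-box widths (as a fraction of frame width)
             for one tracked person; a wider box means the person is closer.
    fps   -- camera frame rate for the frames in `track`.
    """
    if len(track) < 2 or track[0] <= 0:
        return False
    seconds = (len(track) - 1) / fps
    # geometric growth rate of the box width, normalized per second
    growth_per_s = (track[-1] / track[0]) ** (1.0 / seconds)
    return growth_per_s > growth_per_s_thresh
```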

# Solution Components

## Subsystem 1 - Processing

Broadcom BCM2711 SoC with a 64-bit quad-core ARM Cortex-A72 processor, or potentially a smaller microcontroller such as the NXP LPC15xx series, for image processing and for supplying stepped-down voltage to the various sensors and actuators.

## Subsystem 2 - Power

Converts external battery power to the voltage levels required by the on-system chips.

## Subsystem 3 - Sensors

Camera - Night-vision adjustable-focus camera module (5MP OV5647) to detect objects in the dark

Proximity sensor - detects obstacle distance before turning the camera on; potentially an ultrasonic sensor such as the HC-SR04, or a passive infrared (PIR) sensor

Haptic feedback - Vibrating Mini Motor Disc [ADA1201] to alert the user that something was identified
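
The proximity gating described above is simple to express: convert the HC-SR04's round-trip echo time into a one-way distance and only power the camera when something is inside a wake radius. A sketch, where the 3 m radius is an assumption to be tuned:

```python
SPEED_OF_SOUND_M_S = 343.0  # dry air at ~20 C

def echo_to_distance_m(echo_seconds):
    """HC-SR04 reports round-trip echo time; halve it for one-way distance."""
    return echo_seconds * SPEED_OF_SOUND_M_S / 2.0

def should_wake_camera(echo_seconds, wake_radius_m=3.0):
    """Gate the power-hungry camera on the cheap ultrasonic reading."""
    return echo_to_distance_m(echo_seconds) < wake_radius_m
```

Keeping the camera and SoC asleep until the ultrasonic sensor trips should dominate the battery budget, since the ranging ping costs orders of magnitude less power than continuous image capture.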

# Criterion For Success

The Backpack Buddy will provide an image-based solution for identifying any imposing figure within the user's blind spots, helping to ensure the user's safety. Our solution is unique: there are currently no wearable visual monitoring solutions for nighttime safety.

Potential extensions:

- GNSS for location tracking

- Light sensor for identifying when the user is outdoors

- Heart-rate sensor for gauging user stress levels

- Camera stabilization

- Thermal camera
