Project
| # | Title | Team Members | TA | Documents | Sponsor |
|---|---|---|---|---|---|
| 23 | Portable RAW Reconstruction Accelerator for Legacy CCD Imaging | Guyan Wang, Yuhong Chen | | other1.docx | |
# **RFA: Portable RAW Reconstruction Accelerator for Legacy CCD Imaging**

Group Members: Guyan Wang, Yuhong Chen

## **1. Problem Statement**

**The "Glass-Silicon Gap":** Many legacy digital cameras (circa 2000-2010) are equipped with premium optics (Leica, Zeiss, high-grade Nikon/Canon glass) that outresolve their internal processing pipelines. While the optical pathway is high-fidelity, the final image quality is bottlenecked by:

- **Obsolete Signal Chains:** Early-generation analogue-to-digital converters (ADCs) and readout circuits introduce significant read noise and pattern noise.
- **Destructive Processing:** In-camera JPEGs discard dynamic range and detail. Even legacy RAW files are often processed with rudimentary demosaicing algorithms that fail to distinguish high-frequency texture from sensor noise.
- **Usability Void:** Users seeking the distinctive "CCD look" are forced to rely on cumbersome desktop post-processing workflows (e.g., Lightroom, Topaz), preventing a portable, shoot-to-share experience.

## **2. Solution Overview**

**The "Digital Back" External Accelerator:** We propose a standalone, handheld hardware device, a "smart reconstruction box," that interfaces physically with legacy CCD cameras. Instead of relying on the camera's internal image processor, the device ingests the raw sensor data (CCD RAW) and applies a hybrid reconstruction pipeline.

The core innovation is a **Hardware-Oriented Hybrid Pipeline**:

- **Classical Signal Processing:** Handles deterministic error correction (black level subtraction, gain normalization, hot pixel mapping).
- **Learned Estimator (AI):** A lightweight Convolutional Neural Network (CNN) or Vision Transformer model optimized for microcontroller inference (TinyML). This model does not "hallucinate" new detail; it acts as a probabilistic estimator that separates signal from stochastic noise based on the physics of CCD sensor characteristics.

The device will feature a touchscreen interface for file selection and "film simulation" style filter application, targeting output quality perceptually comparable to a modern full-frame sensor (e.g., Sony A7 III) in terms of dynamic range recovery and noise floor.

## **3. Solution Components**

### **Component A: The Compute Core (Embedded Host)**

- **MCU:** STMicroelectronics **STM32H7 Series** (e.g., STM32H747/H757).
  - _Rationale:_ The dual-core architecture (Cortex-M7 + Cortex-M4) allows separation of UI logic and heavy DSP operations. The Chrom-ART Accelerator helps with display handling, while the high clock speed supports the computationally intensive reconstruction algorithms.
- **Memory:** External SDRAM/HyperRAM expansion (essential for buffering full-resolution RAW files, e.g., 10-24 MP) and high-speed QSPI Flash for AI model weight storage.

### **Component B: Connectivity & Data Ingestion Interface**

- **Physical I/O:** USB OTG (On-The-Go) Host port.
  - _Function:_ The device acts as a USB Host, mounting the camera (or the camera's card reader) as a Mass Storage Device to pull RAW files (.CR2, .NEF, .RAF, .DNG).
- **Storage:** On-board MicroSD card slot for saving processed/reconstructed JPEGs or TIFFs.

### **Component C: Hybrid Reconstruction Algorithm**

- **Stage 1 (DSP):** Linearization, dark frame subtraction (optional calibration), and white balance gain application.
- **Stage 2 (NPU/AI):** A quantization-aware trained model (likely TensorFlow Lite for Microcontrollers or STM32Cube.AI) trained specifically on _noisy CCD to clean CMOS_ image pairs.
  - _Task:_ Joint Demosaicing and Denoising (JDD); a tile-based inference sketch follows this component.
- **Stage 3 (Color):** Application of specific "Film Looks" (LUTs) selected by the user via the UI.
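To make the Stage 1/Stage 2 split concrete, the sketch below shows how one tile of the pipeline could look on the STM32H7 with TensorFlow Lite for Microcontrollers. It is a minimal sketch, not the final implementation: the model blob `g_jdd_model_data`, the 64x64 RGGB tile size, the 200 KB tensor arena, the registered operator set, and the int8 quantization assumptions are all illustrative, and the `MicroInterpreter` constructor arguments vary slightly between TFLM releases.

```cpp
// Sketch only: Stage 1 (deterministic DSP) + Stage 2 (TFLite Micro inference)
// for a single Bayer tile. Model name, tile size and tensor layout are assumptions.
#include <cstdint>
#include <cmath>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_jdd_model_data[];  // quantized JDD model stored in QSPI flash (assumed)

constexpr int kTile = 64;                       // illustrative tile size
constexpr int kArenaSize = 200 * 1024;          // assumed tensor-arena budget in internal SRAM
static uint8_t tensor_arena[kArenaSize] __attribute__((aligned(16)));

// Stage 1: black level subtraction, normalization to [0,1] and per-channel
// white balance gains applied to a linear Bayer tile (RGGB layout assumed).
void stage1_dsp(const uint16_t* raw, float* out,
                float black_level, float white_level,
                float wb_r, float wb_g, float wb_b) {
  for (int y = 0; y < kTile; ++y) {
    for (int x = 0; x < kTile; ++x) {
      float v = (raw[y * kTile + x] - black_level) / (white_level - black_level);
      v = v < 0.f ? 0.f : v;
      // Pick the white balance gain from the pixel's position in the Bayer pattern.
      float gain = (y % 2 == 0) ? ((x % 2 == 0) ? wb_r : wb_g)
                                : ((x % 2 == 0) ? wb_g : wb_b);
      out[y * kTile + x] = v * gain;
    }
  }
}

// Stage 2: run the learned joint demosaicing/denoising estimator on one tile.
// The interpreter is built per call only to keep the sketch self-contained;
// in practice it would be set up once at boot.
bool stage2_jdd(const float* tile_in, float* rgb_out) {
  const tflite::Model* model = tflite::GetModel(g_jdd_model_data);
  if (model->version() != TFLITE_SCHEMA_VERSION) return false;

  // Register only the operators the model actually uses (assumed op set).
  tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddConv2D();
  resolver.AddDepthwiseConv2D();
  resolver.AddRelu();
  resolver.AddAdd();

  // Constructor arguments differ slightly across TFLM releases.
  tflite::MicroInterpreter interpreter(model, resolver, tensor_arena, kArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return false;

  // Quantize the float tile into the int8 input tensor.
  TfLiteTensor* input = interpreter.input(0);
  for (int i = 0; i < kTile * kTile; ++i) {
    int32_t q = lroundf(tile_in[i] / input->params.scale) + input->params.zero_point;
    q = q < -128 ? -128 : (q > 127 ? 127 : q);
    input->data.int8[i] = static_cast<int8_t>(q);
  }

  if (interpreter.Invoke() != kTfLiteOk) return false;

  // Dequantize the RGB output tile (same spatial size, 3 channels assumed).
  const TfLiteTensor* output = interpreter.output(0);
  for (int i = 0; i < kTile * kTile * 3; ++i) {
    rgb_out[i] = (output->data.int8[i] - output->params.zero_point) * output->params.scale;
  }
  return true;
}
```

Tiling keeps the working set inside on-chip SRAM; full frames buffered in external SDRAM would be swept tile by tile through `stage1_dsp` and `stage2_jdd` before Stage 3 applies the selected LUT.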
### **Component D: Human-Machine Interface (HMI)**

- **Display:** 2.8" to 3.5" capacitive touchscreen (SPI or MIPI DSI interface).
- **GUI Stack:** TouchGFX or LVGL.
  - _Workflow:_ User plugs in camera -> device scans for RAWs (a FatFs directory-scan sketch is given at the end of this document) -> user selects thumbnails -> user chooses "Filter/Profile" -> device processes and saves to SD card.

## **4. Criterion for Success**

To be considered successful, the prototype must meet the following benchmarks:

- **Quality Parity:** The output image, when blind-tested against the same scene shot on a modern CMOS sensor (Sony A7 III class), must show statistically insignificant differences in perceived noise at ISO 400-800 equivalent.
- **Edge Preservation:** The AI reconstruction must demonstrate a reduction in color moiré and false-color artifacts compared to standard bilinear demosaicing, without "smoothing" genuine texture (measured via MTF charts).
- **Latency:** Total processing time for a 10-megapixel RAW file must be under **15 seconds** on the STM32 hardware.
- **Universal RAW Support:** Successful parsing and decoding of at least two major legacy formats (e.g., Nikon .NEF from the D200 era and Canon .CR2 from the 5D Classic era).

## **5. Alternatives**

- **Desktop Post-Processing (Software Only):**
  - _Pros:_ Effectively unlimited computing power, established tools (DxO PureRAW), highly customizable.
  - _Cons:_ Destroys the portability of the photography experience and cannot be done in the field; users must become proficient with the software's parameters, which requires self-training and tutoring (not user-friendly).
- **Smartphone App (via USB-C dongle):**
  - _Pros:_ Powerful processors (Snapdragon/A-Series), high-resolution screens, easy to use.
  - _Cons:_ Lacks low-level control over USB mass storage protocols for obscure legacy cameras; high friction in file management; operating system overhead prevents bare-metal optimization of the signal pipeline; off-the-shelf app algorithms may not be suitable for legacy CCD cameras.
- **FPGA Implementation (Zynq/Cyclone):**
  - _Pros:_ Parallel processing could make reconstruction near-instant.
  - _Cons:_ Significantly higher complexity, cost, and power consumption compared to an STM32 implementation; a higher barrier to entry for a "mini project."
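The ingestion step referenced in Components B and D (the "device scans for RAWs" stage of the workflow) amounts to enumerating candidate RAW files on the mass-storage volume mounted by the USB Host stack. Below is a minimal sketch using the FatFs API that the STM32 USB Host MSC middleware exposes; the drive path, the extension list, and the `on_raw_found` callback are assumptions, and actual .NEF/.CR2 container parsing happens only after the user selects a file.

```cpp
// Sketch only: enumerate candidate RAW files on a FatFs volume mounted by the
// USB Host MSC class. Directory path, extension list and callback are assumptions.
#include <cstring>
#include <cctype>

#include "ff.h"  // FatFs

// Case-insensitive check of the filename extension against known RAW types.
static bool has_raw_extension(const char* name) {
  static const char* kExt[] = {".CR2", ".NEF", ".RAF", ".DNG"};
  const char* dot = std::strrchr(name, '.');
  if (!dot) return false;
  for (const char* ext : kExt) {
    size_t i = 0;
    for (; ext[i] && dot[i]; ++i) {
      if (std::toupper(static_cast<unsigned char>(dot[i])) != ext[i]) break;
    }
    if (!ext[i] && !dot[i]) return true;  // both strings ended together: match
  }
  return false;
}

// Scan one directory (typically a DCIM subfolder) and report each RAW file
// to the UI layer via the supplied callback.
FRESULT scan_raw_files(const char* path, void (*on_raw_found)(const char* name)) {
  DIR dir;
  FILINFO info;
  FRESULT res = f_opendir(&dir, path);
  if (res != FR_OK) return res;

  for (;;) {
    res = f_readdir(&dir, &info);
    if (res != FR_OK || info.fname[0] == '\0') break;  // error or end of directory
    if (info.fattrib & AM_DIR) continue;               // subdirectories handled elsewhere
    if (has_raw_extension(info.fname)) on_raw_found(info.fname);
  }
  f_closedir(&dir);
  return res;
}
```

In the Component D workflow, `on_raw_found` would add an entry (and later a thumbnail) to the LVGL/TouchGFX file browser; the selected file is then streamed into SDRAM for the reconstruction pipeline.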