Technical Name
CoVerify Vision: Real-Time Trusted Media Shield
Project Operator
National Yang Ming Chiao Tung University
Project Host
許志仲
Summary
CoVerify Vision is a next-generation verification framework built on "Human-AI Bi-directional Augmentation." It targets critical bottlenecks in deepfake detection, such as real-world compression and novel attacks, by fusing robust AI perception with a human-cognitive feedback loop. Expert intuition is quantified through cognitive-science methods and fed back to the AI as 'clues', creating a self-evolving, symbiotic system that overcomes real-world detection challenges.
Scientific Breakthrough
We present CoVerify Vision, an adaptive deepfake verification framework integrating a Laplacian-regularized GCN, unimodal-to-multimodal contrastive learning (UMCL), and a human-in-the-loop cognitive feedback loop. The system addresses real-world challenges including compression artifacts, adversarial forgeries, and lack of explainability. Its core scientific contributions are under peer review at TPAMI and IJCV, reflecting strong potential for high-impact results.
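The exact formulation of the Laplacian-regularized GCN is not given here, but the general idea can be sketched. In this hypothetical setup, image patches form a graph, one GCN layer propagates patch embeddings, and a Laplacian smoothness term tr(HᵀLH) penalizes embeddings that differ sharply across connected patches. All names and the toy graph below are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

def laplacian_penalty(A, H):
    """Smoothness regularizer tr(H^T L H) with L = D - A.
    Nonnegative, since the graph Laplacian L is positive semidefinite."""
    L = np.diag(A.sum(axis=1)) - A
    return np.trace(H.T @ L @ H)

# Toy example: 4 image patches with 3-dim features (illustrative only).
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # patch adjacency
H = rng.standard_normal((4, 3))             # input patch embeddings
W = rng.standard_normal((3, 2))             # layer weights
H_next = gcn_layer(A, H, W)                 # propagated embeddings, shape (4, 2)
loss_reg = laplacian_penalty(A, H_next)     # added to the training loss
```

In training, `loss_reg` would be weighted and added to the detection loss, encouraging neighboring patches to receive consistent forgery evidence.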
Industrial Applicability
CoVerify Vision is piloted with National Central University's Frontier Tech Research Center and several agencies for fact-checking and media forensics. Since 2023 it has supported CNA, The Reporter, FTV, and TFC. Collaboration with NVIDIA and PHISON accelerates vision-language model (VLM) multimodal fusion on edge devices. A three-track commercialization plan (software licensing, cloud API, and turnkey projects) drives adoption, while the human-AI loop counters election disinformation and deepfake fraud.
Keywords
Deepfake, Misinformation, Fake news, Fact-checking, Human-AI interaction, Embedded system, Robust AI, Natural language processing, Anti-fraud, Adversarial defense