QuickLLaMA: Query-aware Inference Acceleration for Large Language Models
Jingyao Li, Han Shi, Xin Jiang, Zhenguo Li, Hong Xu, Jiaya Jia
Preprint, 2024
Paper /
Code
Our QuickLLaMA outperforms the StreamingLLM series and establishes a new SOTA on LongBench, InfiniteBench, and the Needle-in-a-Haystack task.
Rethinking Out-of-Distribution Detection: Masked Image Modeling is All You Need
Jingyao Li, Pengguang Chen, Shaozuo Yu, Zexin He, Shu Liu, Jiaya Jia
CVPR, 2023
Paper /
Project Page /
Code
In this work, we find, surprisingly, that simply using reconstruction-based methods helps the model learn the intrinsic data distribution of the ID dataset, which significantly boosts OOD detection performance.
BAL: Balancing Diversity and Novelty for Active Learning
Jingyao Li, Pengguang Chen, Shaozuo Yu, Shu Liu, Jiaya Jia
TPAMI, 2023
Paper /
Code
In this study, we harness features acquired through self-supervised learning and introduce a simple yet effective metric, Cluster Distance Difference, to identify diverse data, together with a novel framework, Balancing Active Learning (BAL).
MOODv2: Masked Image Modeling for Out-of-Distribution Detection
Jingyao Li, Pengguang Chen, Shaozuo Yu, Shu Liu, Jiaya Jia
TPAMI, 2024
Paper /
Code
In our study, we conduct a comprehensive analysis across distinct pretraining tasks and various OOD score functions. The results highlight that feature representations pretrained through reconstruction yield a notable improvement and narrow the performance gap among score functions.
Elevating Large Language Models with Modular of Thought for Challenging Programming Tasks
Jingyao Li, Pengguang Chen, Jiaya Jia
Preprint, 2024
Paper /
Model /
Code
We introduce a framework for Modular-of-Thought instruction tuning that encourages the decomposition of tasks into logical sub-tasks and sub-modules, significantly improving both the modularity and correctness of the generated solutions.
RoboCoder: Robotic Learning from Basic Skills to General Tasks with Large Language Models
Jingyao Li, Pengguang Chen, Sitong Wu, Chuanyang Zheng, Hong Xu, Jiaya Jia
Preprint, 2024
Paper
Our research develops a general-purpose robotic coding algorithm that enables robots to leverage basic skills to tackle increasingly complex tasks.
TagCLIP: Improving Discrimination Ability of Open-Vocabulary Semantic Segmentation
Jingyao Li, Pengguang Chen, Shengju Qian, Jiaya Jia
Preprint, 2023
Paper
In our work, we disentangle the ill-posed optimization problem into two parallel processes: one performs semantic matching individually, while the other judges reliability to improve discrimination ability.
VLPose: Bridging the Domain Gap in Pose Estimation with Language-Vision Tuning
Jingyao Li, Pengguang Chen, Xuan Ju, Hong Xu, Jiaya Jia
Preprint, 2023
Paper
Our research bridges the domain gap between natural and artificial scenarios through efficient tuning strategies that leverage the potential of language models.
SmartMore Corporation Limited
Computer Vision Developer
Mentor: Shu Liu
Aug. 2021 - Present
The Chinese University of Hong Kong
Ph.D. student, Department of Computer Science and Engineering
Supervisor: Jiaya Jia
Aug. 2022 - Present
Xi'an Jiaotong University
Bachelor's degree, Department of Automation
Supervisor: Zhansheng Duan
Sep. 2018 - Jul. 2022
University of Cambridge
Artificial Intelligence and Industry 4.0 Project
Jul. 2019
National Scholarship, 2020
First Prize of China Undergraduate Physics Tournament, 2018
Meritorious Winner of American Mathematical Contest in Modeling, 2019
Grand Prize of Northwest Undergraduate Physics Tournament, 2018
CENG2010 | Digital Logic Design Laboratory | 2023 Spring
CSCI3250 | Computers and Society | 2022 Fall