
Daily Papers

by AK and the research community


AnyPattern: Towards In-context Image Copy Detection

This paper explores in-context learning for image copy detection (ICD), i.e., prompting an ICD model to identify replicated images with new tampering patterns without additional training. The prompts (or contexts) are drawn from a small set of image-replica pairs that reflect the new patterns and are used at inference time. Such in-context ICD has strong practical value because it requires no fine-tuning, enabling fast reaction to newly emerging patterns. To accommodate the "seen → unseen" generalization scenario, we construct the first large-scale pattern dataset, named AnyPattern, which has the largest number of tamper patterns (90 for training and 10 for testing) among all existing datasets. We benchmark AnyPattern with popular ICD methods and reveal that existing methods barely generalize to novel tamper patterns. We further propose a simple in-context ICD method named ImageStacker. ImageStacker learns to select the most representative image-replica pairs and employs them as pattern prompts in a stacking manner (rather than the popular concatenation manner). Experimental results show that (1) training with our large-scale dataset substantially benefits pattern generalization (+26.66% μAP), (2) the proposed ImageStacker facilitates effective in-context ICD (a further +16.75% μAP), and (3) AnyPattern enables in-context ICD, i.e., without such a large-scale dataset, in-context learning does not emerge even with our ImageStacker. The project (including the proposed dataset AnyPattern and the code for ImageStacker) is publicly available at https://anypattern.github.io under the MIT License.

  • 4 authors · Apr 21, 2024
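
To make the stacking idea in the abstract above concrete, here is a minimal, hypothetical sketch of how an image-replica prompt pair might be stacked with a query image along the channel dimension, as opposed to concatenating prompts as extra tokens. The module names, shapes, and backbone are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class StackedPromptICD(nn.Module):
    """Illustrative sketch: stack an (image, replica) prompt pair with the
    query along the channel axis, then embed with a shared backbone.
    Shapes and module choices are assumptions, not the paper's code."""

    def __init__(self, embed_dim: int = 512):
        super().__init__()
        # 9 input channels: query (3) + prompt image (3) + prompt replica (3)
        self.backbone = nn.Sequential(
            nn.Conv2d(9, 64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, query, prompt_img, prompt_replica):
        # (B, 3, H, W) each -> (B, 9, H, W): "stacking" in depth rather
        # than concatenating prompts as a longer token sequence
        x = torch.cat([query, prompt_img, prompt_replica], dim=1)
        return self.backbone(x)

# usage: one pattern prompt conditions the descriptor used for retrieval
model = StackedPromptICD()
q = torch.randn(2, 3, 224, 224)
p_img = torch.randn(2, 3, 224, 224)
p_rep = torch.randn(2, 3, 224, 224)
emb = model(q, p_img, p_rep)  # (2, 512) embeddings
```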

Stacking Brick by Brick: Aligned Feature Isolation for Incremental Face Forgery Detection

The rapid advancement of face forgery techniques has introduced a growing variety of forgeries. Incremental Face Forgery Detection (IFFD), which gradually adds new forgery data to fine-tune a previously trained model, has been introduced as a promising strategy for dealing with evolving forgery methods. However, a naively trained IFFD model is prone to catastrophic forgetting when new forgeries are integrated: treating all forgeries as a single "Fake" class in Real/Fake classification lets different forgery types override one another, erasing the unique characteristics of earlier tasks and limiting the model's ability to learn both forgery specificity and generality. In this paper, we propose to stack the latent feature distributions of previous and new tasks brick by brick, i.e., to achieve aligned feature isolation. In this manner, we aim to preserve learned forgery information and accumulate new knowledge while minimizing distribution overriding, thereby mitigating catastrophic forgetting. To achieve this, we first introduce Sparse Uniform Replay (SUR) to obtain representative subsets that can be treated as uniformly sparse versions of the previous global distributions. We then propose a Latent-space Incremental Detector (LID) that leverages SUR data to isolate and align distributions. For evaluation, we construct a more advanced and comprehensive benchmark tailored for IFFD. Strong experimental results validate the superiority of our method.

  • 8 authors · Nov 18, 2024
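
As a rough illustration of the replay idea above, the sketch below selects a small subset that covers a previous task's feature distribution approximately uniformly, using farthest-point sampling. This is a generic stand-in of my own choosing, not the paper's actual SUR selection rule; the function name and budget are hypothetical.

```python
import numpy as np

def sparse_uniform_replay(features: np.ndarray, budget: int) -> np.ndarray:
    """Pick `budget` samples covering a feature distribution roughly
    uniformly via farthest-point sampling. Illustrative stand-in for the
    paper's Sparse Uniform Replay (SUR), not the authors' method."""
    n = features.shape[0]
    chosen = [np.random.randint(n)]
    # distance of every point to its nearest already-chosen sample
    dists = np.linalg.norm(features - features[chosen[0]], axis=1)
    for _ in range(budget - 1):
        nxt = int(dists.argmax())  # farthest point = least-covered region
        chosen.append(nxt)
        dists = np.minimum(dists,
                           np.linalg.norm(features - features[nxt], axis=1))
    return np.array(chosen)

# usage: keep 128 representative "bricks" from an earlier forgery task
old_feats = np.random.randn(10000, 256).astype(np.float32)
replay_idx = sparse_uniform_replay(old_feats, budget=128)
```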

Stacking Your Transformers: A Closer Look at Model Growth for Efficient LLM Pre-Training

LLMs are computationally expensive to pre-train due to their large scale. Model growth has emerged as a promising approach that leverages smaller models to accelerate the training of larger ones. However, the viability of model growth methods for efficient LLM pre-training remains underexplored. This work identifies three critical obstacles: (O1) lack of comprehensive evaluation, (O2) untested viability for scaling, and (O3) lack of empirical guidelines. To tackle O1, we summarize existing approaches into four atomic growth operators and systematically evaluate them in a standardized LLM pre-training setting. Our findings reveal that a depthwise stacking operator, called G_stack, exhibits remarkable acceleration in training, leading to decreased loss and improved overall performance on eight standard NLP benchmarks compared to strong baselines. Motivated by these promising results, we conduct extensive experiments to delve deeper into G_stack to address O2 and O3. For O2 (untested scalability), our study shows that G_stack is scalable and consistently performs well, with experiments on LLMs up to 7B parameters after growth and pre-training on up to 750B tokens. For example, compared to a conventionally trained 7B model using 300B tokens, our G_stack model converges to the same loss with 194B tokens, a 54.6% speedup. We further address O3 (lack of empirical guidelines) by formalizing guidelines for determining the growth timing and growth factor of G_stack, making it practical in general LLM pre-training. We also provide in-depth discussions and comprehensive ablation studies of G_stack. Our code and pre-trained models are available at https://llm-stacking.github.io/.

  • 8 authors · May 24, 2024
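
The depthwise stacking operator described above can be sketched in a few lines: grow a small model into a deeper one by repeating its block sequence. This is a minimal sketch in the spirit of G_stack under an assumed layer structure; the function name and growth factor are illustrative, not the authors' code.

```python
import copy
import torch.nn as nn

def g_stack(model_layers: nn.ModuleList, growth_factor: int = 2) -> nn.ModuleList:
    """Depthwise stacking sketch: repeat a trained block sequence
    `growth_factor` times to initialize a deeper model before continued
    pre-training. An assumption-laden illustration of the G_stack idea."""
    grown = []
    for _ in range(growth_factor):
        # deep-copy so the grown layers can diverge during further training
        grown.extend(copy.deepcopy(layer) for layer in model_layers)
    return nn.ModuleList(grown)

# usage: grow a 4-layer stack into an 8-layer one, then keep pre-training
small = nn.ModuleList(nn.TransformerEncoderLayer(d_model=256, nhead=4)
                      for _ in range(4))
large = g_stack(small, growth_factor=2)
assert len(large) == 8
```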

GMRT observation of neutral atomic hydrogen gas in the COSMOS field at z ~ 0.37

We present the results of an HI spectral stacking analysis of Giant Metrewave Radio Telescope (GMRT) observations targeting the COSMOS field. The GMRT data cube contains 474 field galaxies with redshifts known from the zCOSMOS-bright 10k catalogue. Spectra for the galaxies are co-added, and the stacked spectrum allows us to make a ~3σ measurement of the average HI mass. Using this average HI mass along with the integral optical B-band luminosity of the galaxies and the luminosity density of the COSMOS field, a volume normalisation is applied to obtain the cosmic HI mass density (Ω_HI). We find a cosmic HI mass density of Ω_HI = (0.42 ± 0.16) × 10⁻³ at z ~ 0.37, which is the highest-redshift measurement of Ω_HI ever made using HI spectral stacking. The value we obtain for Ω_HI at z ~ 0.37 is consistent with that measured from large blind 21-cm surveys at z = 0, as well as with measurements from other HI stacking experiments at lower redshifts. Our measurement, in conjunction with earlier measurements, indicates that there has been no significant evolution of HI gas abundance over the last 4 Gyr. A weighted mean of Ω_HI from all 21-cm measurements at redshifts z ≲ 0.4 gives Ω_HI = (0.35 ± 0.01) × 10⁻³. The Ω_HI measured (from HI 21-cm emission) at z ≲ 0.4 is, however, approximately half that measured from Damped Lyman-α Absorption (DLA) systems at z ≳ 2. Deeper surveys with existing and upcoming instruments will be critical to understanding the evolution of Ω_HI in the redshift range intermediate between z ~ 0.4 and the range probed by DLA observations.

  • 5 authors · May 6, 2016
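
The core co-adding step of spectral stacking can be illustrated with a short sketch: shift each galaxy's spectrum to its rest frame using its known optical redshift, then average, so a 21-cm signal too weak to detect in any single galaxy emerges in the stack. This is a simplified illustration with hypothetical function names and crude index-space resampling; a real pipeline also handles frequency gridding, RFI flagging, and confusion corrections.

```python
import numpy as np

def stack_hi_spectra(spectra, redshifts, weights=None):
    """Co-add 21-cm spectra after de-redshifting each to the rest frame.
    Illustrative only: index-space resampling stands in for proper
    rest-frame frequency gridding."""
    spectra = np.asarray(spectra, dtype=float)
    if weights is None:
        weights = np.ones(len(spectra))
    idx = np.arange(spectra.shape[1])
    rest_frame = []
    for spec, z in zip(spectra, redshifts):
        # de-redshift: sample the observed spectrum at index/(1+z), a crude
        # stand-in for resampling onto a common rest-frame frequency grid
        rest_frame.append(np.interp(idx / (1.0 + z), idx, spec))
    return np.average(np.array(rest_frame), axis=0, weights=weights)

# usage: 474 noisy spectra; stacking beats the noise down ~ 1/sqrt(474)
specs = np.random.randn(474, 1024)
zs = np.random.uniform(0.30, 0.45, size=474)
stacked = stack_hi_spectra(specs, zs)
```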