Public and private blockchain for decentralized digital building twins and building automation system

arXiv:2604.16534v1 Announce Type: cross Abstract: The communication protocols and data transfer mechanisms employed by IoT devices in smart buildings and corresponding digital twin systems predominantly rely on centralized architectures. Such centralized systems are vulnerable to single points of failure, where a malfunction can disrupt operational processes. This study introduces a blockchain-based decentralized protocol to enhance […]

Why Training-Free Token Reduction Collapses: The Inherent Instability of Pairwise Scoring Signals

arXiv:2604.16745v1 Announce Type: new Abstract: Training-free token reduction methods for Vision Transformers (ToMe, ToFu, PiToMe, and MCTF) employ different scoring mechanisms, yet they share a closely matched cliff-like collapse at high compression. This paper explains \emph{why}. We develop a diagnostic framework with two tools, ranking consistency $\rho_s$ and off-diagonal correlation $\rho_{\text{off}}$, that decomposes the collapse […]
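The abstract is truncated before it defines the two diagnostics, but a ranking-consistency measure between two token-scoring mechanisms is typically a Spearman rank correlation over per-token scores. The sketch below is an illustration under that assumption, not the paper's exact definition of $\rho_s$:

```python
def rankdata(xs):
    """Assign 1-based ranks to xs, averaging ranks over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks


def ranking_consistency(scores_a, scores_b):
    """Spearman correlation between two methods' token scores.

    Near +1 means the two scoring mechanisms would prune the same
    tokens; low values signal the unstable pairwise signals the
    abstract attributes the collapse to.
    """
    ra, rb = rankdata(scores_a), rankdata(scores_b)
    n = len(ra)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra)
    vb = sum((y - mb) ** 2 for y in rb)
    return cov / (va * vb) ** 0.5
```

A diagnostic like this can be computed layer by layer: if consistency degrades sharply at high compression ratios, the pruning decisions of nominally different methods are being driven by the same fragile signal.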

An Interpretable Framework Applying Protein Words to Predict Protein-Small Molecule Complementary Pairing Rules

arXiv:2604.16550v1 Announce Type: cross Abstract: Despite the high accuracy of ‘black box’ deep learning models, drug discovery still relies on protein-ligand interaction principles and heuristics. To improve interpretability of protein-small molecule binding predictions, we developed the PWRules framework, which applies binding affinity data to identify privileged small molecule fragments and subsequently defines complementary pairing rules […]

Curriculum-RLAIF: Curriculum Alignment with Reinforcement Learning from AI Feedback

arXiv:2505.20075v2 Announce Type: replace Abstract: Reward models trained through Reinforcement Learning from AI Feedback (RLAIF) methods frequently suffer from limited generalizability, which hinders the alignment performance of policy models. This challenge stems from various issues, including distribution shift, preference label noise, and mismatch of overly challenging samples with model capacity. In this paper, we aim […]

In Search of Lost DNA Sequence Pretraining

arXiv:2604.16570v1 Announce Type: cross Abstract: DNA sequence encoding is fundamental to gene function prediction, protein synthesis, and diverse downstream biological tasks. Despite the substantial progress achieved by large-scale DNA sequence pretraining, existing studies have overwhelmingly emphasized pretraining scale and custom downstream evaluation datasets, while neglecting some essential components of the pretraining paradigm. In this paper, […]

POLAR: Online Learning for LoRA Adapter Caching and Routing in Edge LLM Serving

arXiv:2604.16583v1 Announce Type: cross Abstract: Edge deployment of large language models (LLMs) increasingly relies on libraries of lightweight LoRA adapters, yet GPU/DRAM can keep only a small resident subset at a time. Serving a request through a non-resident adapter requires paging its weights from storage, incurring measurable latency. This creates a two-timescale online control problem: […]
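POLAR's learned caching-and-routing policy is not specified in the truncated abstract; the setup it describes (a small resident subset of adapters, with a paging penalty on misses) can be illustrated with a plain LRU baseline. All names below are illustrative, not the paper's API:

```python
from collections import OrderedDict


class AdapterCache:
    """LRU resident set of LoRA adapters (a baseline, not POLAR's policy).

    Serving through a resident adapter is the fast path; a miss models
    paging the adapter's weights in from storage, which the abstract
    notes incurs measurable latency.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.resident = OrderedDict()  # adapter_id -> weights (stub)
        self.page_ins = 0              # slow-path count (storage fetches)

    def serve(self, adapter_id):
        if adapter_id in self.resident:
            self.resident.move_to_end(adapter_id)  # hit: refresh recency
            return "hit"
        self.page_ins += 1                         # miss: page weights in
        if len(self.resident) >= self.capacity:
            self.resident.popitem(last=False)      # evict least recent
        self.resident[adapter_id] = object()       # stand-in for weights
        return "page-in"
```

An online-learning policy would replace the eviction and routing decisions here with ones adapted to the observed request stream; the two-timescale aspect comes from caching decisions (slow) interacting with per-request routing (fast).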

Tape: A Cellular Automata Benchmark for Evaluating Rule-Shift Generalization in Reinforcement Learning

arXiv:2601.04695v2 Announce Type: replace Abstract: Out-of-distribution generalization in reinforcement learning is hard to diagnose when benchmark shifts mix dynamics, observations, goals, and rewards. We address this with Tape, a controlled benchmark that isolates latent rule-shift in dynamics while keeping the observation-action interface fixed. The protocol combines deterministic splits, 20-seed replication, bootstrap uncertainty reporting, and continuous […]
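The abstract's protocol combines 20-seed replication with bootstrap uncertainty reporting. A standard way to report uncertainty over per-seed returns is a percentile bootstrap interval on the mean; the sketch below assumes that convention (the paper's exact estimator is not given in the truncated text):

```python
import random


def bootstrap_ci(per_seed_scores, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of per-seed scores.

    Resamples the seed-level scores with replacement, recomputes the
    mean each time, and returns the (alpha/2, 1 - alpha/2) percentiles
    of the resampled means.
    """
    rng = random.Random(seed)
    n = len(per_seed_scores)
    means = []
    for _ in range(n_resamples):
        sample = [rng.choice(per_seed_scores) for _ in range(n)]
        means.append(sum(sample) / n)
    means.sort()
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi
```

With only 20 seeds the interval is wide, which is precisely why reporting it matters: overlapping intervals across rule-shift conditions indicate that an apparent generalization gap may not be resolvable at that replication budget.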

Randomized Antipodal Search Done Right for Data Pareto Improvement of LLM Unlearning

arXiv:2604.16591v1 Announce Type: cross Abstract: Large language models (LLMs) sometimes memorize undesirable knowledge, which must be removed after deployment. Prior work on machine unlearning has focused largely on optimization methods that adjust parameters to enforce forgetting while preserving retention. However, these approaches assume that the forget and retain sets are readily available, which rarely holds […]

Know When to Trust the Skill: Delayed Appraisal and Epistemic Vigilance for Single-Agent LLMs

arXiv:2604.16753v1 Announce Type: new Abstract: As large language models (LLMs) transition into autonomous agents integrated with extensive tool ecosystems, traditional routing heuristics increasingly succumb to context pollution and “overthinking”. We argue that the bottleneck is not a deficit in algorithmic capability or skill diversity, but the absence of disciplined second-order metacognitive governance. In this paper, […]

Cross-Modal Bayesian Low-Rank Adaptation for Uncertainty-Aware Multimodal Learning

arXiv:2604.16657v1 Announce Type: cross Abstract: Large pre-trained language models are increasingly adapted to downstream tasks using parameter-efficient fine-tuning (PEFT), but existing PEFT methods are typically deterministic and unimodal, making them poorly suited for low-resource multimodal settings where predictive uncertainty and cross-modal reliability both matter. We introduce CALIBER (Context-Aware Low-rank Inference with Bayesian Embedding Regularization), a […]

EmergentBridge: Improving Zero-Shot Cross-Modal Transfer in Unified Multimodal Embedding Models

arXiv:2604.11043v3 Announce Type: replace Abstract: Unified multimodal embedding spaces underpin practical applications such as cross-modal retrieval and zero-shot recognition. In many real deployments, however, supervision is available only for a small subset of modality pairs (e.g., image–text), leaving \emph{unpaired} modality pairs (e.g., audio$\leftrightarrow$depth, infrared$\leftrightarrow$audio) weakly connected and thus performing poorly on zero-shot transfer. Addressing this […]

Copyright 2025 dijee Intelligence Ltd. dijee Intelligence Ltd. is a private limited company registered in England and Wales at Media House, Sopers Road, Cuffley, Hertfordshire, EN6 4RY, UK; registration number 16808844.